00:00:00.000 Started by upstream project "autotest-spdk-v24.09-vs-dpdk-v23.11" build number 168 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3669 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.012 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.013 The recommended git tool is: git 00:00:00.013 using credential 00000000-0000-0000-0000-000000000002 00:00:00.018 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.038 Fetching changes from the remote Git repository 00:00:00.046 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.063 Using shallow fetch with depth 1 00:00:00.063 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.063 > git --version # timeout=10 00:00:00.087 > git --version # 'git version 2.39.2' 00:00:00.087 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.114 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.114 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.262 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.276 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.290 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:02.290 > git config core.sparsecheckout # timeout=10 00:00:02.302 > git read-tree -mu HEAD # timeout=10 00:00:02.319 > git checkout -f 
db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:02.347 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:02.348 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:02.590 [Pipeline] Start of Pipeline 00:00:02.603 [Pipeline] library 00:00:02.605 Loading library shm_lib@master 00:00:02.605 Library shm_lib@master is cached. Copying from home. 00:00:02.617 [Pipeline] node 00:00:02.627 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest 00:00:02.628 [Pipeline] { 00:00:02.635 [Pipeline] catchError 00:00:02.636 [Pipeline] { 00:00:02.645 [Pipeline] wrap 00:00:02.650 [Pipeline] { 00:00:02.655 [Pipeline] stage 00:00:02.656 [Pipeline] { (Prologue) 00:00:02.667 [Pipeline] echo 00:00:02.668 Node: VM-host-WFP7 00:00:02.671 [Pipeline] cleanWs 00:00:02.679 [WS-CLEANUP] Deleting project workspace... 00:00:02.679 [WS-CLEANUP] Deferred wipeout is used... 00:00:02.684 [WS-CLEANUP] done 00:00:02.858 [Pipeline] setCustomBuildProperty 00:00:02.974 [Pipeline] httpRequest 00:00:03.287 [Pipeline] echo 00:00:03.289 Sorcerer 10.211.164.20 is alive 00:00:03.298 [Pipeline] retry 00:00:03.300 [Pipeline] { 00:00:03.313 [Pipeline] httpRequest 00:00:03.318 HttpMethod: GET 00:00:03.319 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.319 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.323 Response Code: HTTP/1.1 200 OK 00:00:03.323 Success: Status code 200 is in the accepted range: 200,404 00:00:03.323 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.466 [Pipeline] } 00:00:03.479 [Pipeline] // retry 00:00:03.484 [Pipeline] sh 00:00:03.762 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.776 [Pipeline] httpRequest 00:00:04.106 [Pipeline] echo 00:00:04.108 Sorcerer 10.211.164.20 is 
alive 00:00:04.114 [Pipeline] retry 00:00:04.115 [Pipeline] { 00:00:04.128 [Pipeline] httpRequest 00:00:04.132 HttpMethod: GET 00:00:04.133 URL: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:04.133 Sending request to url: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:04.134 Response Code: HTTP/1.1 200 OK 00:00:04.134 Success: Status code 200 is in the accepted range: 200,404 00:00:04.135 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:23.473 [Pipeline] } 00:00:23.491 [Pipeline] // retry 00:00:23.499 [Pipeline] sh 00:00:23.781 + tar --no-same-owner -xf spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:26.328 [Pipeline] sh 00:00:26.610 + git -C spdk log --oneline -n5 00:00:26.610 b18e1bd62 version: v24.09.1-pre 00:00:26.610 19524ad45 version: v24.09 00:00:26.610 9756b40a3 dpdk: update submodule to include alarm_cancel fix 00:00:26.610 a808500d2 test/nvmf: disable nvmf_shutdown_tc4 on e810 00:00:26.610 3024272c6 bdev/nvme: take nvme_ctrlr.mutex when setting keys 00:00:26.629 [Pipeline] withCredentials 00:00:26.639 > git --version # timeout=10 00:00:26.649 > git --version # 'git version 2.39.2' 00:00:26.694 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:26.696 [Pipeline] { 00:00:26.704 [Pipeline] retry 00:00:26.706 [Pipeline] { 00:00:26.721 [Pipeline] sh 00:00:27.058 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:00:27.330 [Pipeline] } 00:00:27.350 [Pipeline] // retry 00:00:27.356 [Pipeline] } 00:00:27.372 [Pipeline] // withCredentials 00:00:27.382 [Pipeline] httpRequest 00:00:27.756 [Pipeline] echo 00:00:27.758 Sorcerer 10.211.164.20 is alive 00:00:27.768 [Pipeline] retry 00:00:27.770 [Pipeline] { 00:00:27.786 [Pipeline] httpRequest 00:00:27.791 HttpMethod: GET 00:00:27.792 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 
00:00:27.792 Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:27.813 Response Code: HTTP/1.1 200 OK 00:00:27.814 Success: Status code 200 is in the accepted range: 200,404 00:00:27.814 Saving response body to /var/jenkins/workspace/raid-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:45.383 [Pipeline] } 00:01:45.402 [Pipeline] // retry 00:01:45.410 [Pipeline] sh 00:01:45.695 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:47.087 [Pipeline] sh 00:01:47.370 + git -C dpdk log --oneline -n5 00:01:47.370 eeb0605f11 version: 23.11.0 00:01:47.370 238778122a doc: update release notes for 23.11 00:01:47.370 46aa6b3cfc doc: fix description of RSS features 00:01:47.370 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:47.370 7e421ae345 devtools: support skipping forbid rule check 00:01:47.389 [Pipeline] writeFile 00:01:47.404 [Pipeline] sh 00:01:47.688 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:47.700 [Pipeline] sh 00:01:47.983 + cat autorun-spdk.conf 00:01:47.983 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:47.983 SPDK_RUN_ASAN=1 00:01:47.983 SPDK_RUN_UBSAN=1 00:01:47.983 SPDK_TEST_RAID=1 00:01:47.983 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:47.983 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:47.983 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:47.990 RUN_NIGHTLY=1 00:01:47.992 [Pipeline] } 00:01:48.007 [Pipeline] // stage 00:01:48.023 [Pipeline] stage 00:01:48.026 [Pipeline] { (Run VM) 00:01:48.040 [Pipeline] sh 00:01:48.324 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:48.324 + echo 'Start stage prepare_nvme.sh' 00:01:48.324 Start stage prepare_nvme.sh 00:01:48.324 + [[ -n 5 ]] 00:01:48.324 + disk_prefix=ex5 00:01:48.324 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]] 00:01:48.324 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]] 00:01:48.324 + source 
/var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf 00:01:48.324 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:48.324 ++ SPDK_RUN_ASAN=1 00:01:48.324 ++ SPDK_RUN_UBSAN=1 00:01:48.324 ++ SPDK_TEST_RAID=1 00:01:48.324 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:48.324 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:48.324 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:48.324 ++ RUN_NIGHTLY=1 00:01:48.324 + cd /var/jenkins/workspace/raid-vg-autotest 00:01:48.324 + nvme_files=() 00:01:48.324 + declare -A nvme_files 00:01:48.324 + backend_dir=/var/lib/libvirt/images/backends 00:01:48.324 + nvme_files['nvme.img']=5G 00:01:48.324 + nvme_files['nvme-cmb.img']=5G 00:01:48.324 + nvme_files['nvme-multi0.img']=4G 00:01:48.324 + nvme_files['nvme-multi1.img']=4G 00:01:48.324 + nvme_files['nvme-multi2.img']=4G 00:01:48.324 + nvme_files['nvme-openstack.img']=8G 00:01:48.324 + nvme_files['nvme-zns.img']=5G 00:01:48.324 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:48.324 + (( SPDK_TEST_FTL == 1 )) 00:01:48.324 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:48.324 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:48.324 + for nvme in "${!nvme_files[@]}" 00:01:48.324 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:01:48.324 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:48.324 + for nvme in "${!nvme_files[@]}" 00:01:48.324 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:01:48.324 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:48.324 + for nvme in "${!nvme_files[@]}" 00:01:48.324 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:01:48.324 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:48.324 + for nvme in "${!nvme_files[@]}" 00:01:48.324 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:01:48.324 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:48.324 + for nvme in "${!nvme_files[@]}" 00:01:48.324 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:01:48.324 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:48.324 + for nvme in "${!nvme_files[@]}" 00:01:48.324 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:01:48.324 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:48.324 + for nvme in "${!nvme_files[@]}" 00:01:48.324 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:01:48.583 
Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:48.583 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:01:48.583 + echo 'End stage prepare_nvme.sh' 00:01:48.583 End stage prepare_nvme.sh 00:01:48.595 [Pipeline] sh 00:01:48.880 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:48.880 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora39 00:01:48.880 00:01:48.880 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant 00:01:48.880 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk 00:01:48.880 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest 00:01:48.880 HELP=0 00:01:48.880 DRY_RUN=0 00:01:48.880 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img, 00:01:48.880 NVME_DISKS_TYPE=nvme,nvme, 00:01:48.880 NVME_AUTO_CREATE=0 00:01:48.880 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img, 00:01:48.880 NVME_CMB=,, 00:01:48.880 NVME_PMR=,, 00:01:48.880 NVME_ZNS=,, 00:01:48.880 NVME_MS=,, 00:01:48.880 NVME_FDP=,, 00:01:48.880 SPDK_VAGRANT_DISTRO=fedora39 00:01:48.880 SPDK_VAGRANT_VMCPU=10 00:01:48.880 SPDK_VAGRANT_VMRAM=12288 00:01:48.880 SPDK_VAGRANT_PROVIDER=libvirt 00:01:48.880 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:48.880 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:48.880 SPDK_OPENSTACK_NETWORK=0 00:01:48.880 VAGRANT_PACKAGE_BOX=0 00:01:48.880 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:48.880 
FORCE_DISTRO=true 00:01:48.880 VAGRANT_BOX_VERSION= 00:01:48.880 EXTRA_VAGRANTFILES= 00:01:48.880 NIC_MODEL=virtio 00:01:48.880 00:01:48.880 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt' 00:01:48.880 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest 00:01:51.417 Bringing machine 'default' up with 'libvirt' provider... 00:01:51.676 ==> default: Creating image (snapshot of base box volume). 00:01:51.676 ==> default: Creating domain with the following settings... 00:01:51.676 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732652084_31580c9dde0e739943a9 00:01:51.676 ==> default: -- Domain type: kvm 00:01:51.676 ==> default: -- Cpus: 10 00:01:51.676 ==> default: -- Feature: acpi 00:01:51.676 ==> default: -- Feature: apic 00:01:51.676 ==> default: -- Feature: pae 00:01:51.676 ==> default: -- Memory: 12288M 00:01:51.676 ==> default: -- Memory Backing: hugepages: 00:01:51.676 ==> default: -- Management MAC: 00:01:51.676 ==> default: -- Loader: 00:01:51.676 ==> default: -- Nvram: 00:01:51.676 ==> default: -- Base box: spdk/fedora39 00:01:51.676 ==> default: -- Storage pool: default 00:01:51.676 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732652084_31580c9dde0e739943a9.img (20G) 00:01:51.676 ==> default: -- Volume Cache: default 00:01:51.676 ==> default: -- Kernel: 00:01:51.676 ==> default: -- Initrd: 00:01:51.676 ==> default: -- Graphics Type: vnc 00:01:51.676 ==> default: -- Graphics Port: -1 00:01:51.676 ==> default: -- Graphics IP: 127.0.0.1 00:01:51.676 ==> default: -- Graphics Password: Not defined 00:01:51.676 ==> default: -- Video Type: cirrus 00:01:51.676 ==> default: -- Video VRAM: 9216 00:01:51.676 ==> default: -- Sound Type: 00:01:51.676 ==> default: -- Keymap: en-us 00:01:51.676 ==> default: -- TPM Path: 00:01:51.676 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:51.676 ==> default: -- Command line args: 00:01:51.676 
==> default: -> value=-device, 00:01:51.676 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:51.676 ==> default: -> value=-drive, 00:01:51.676 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0, 00:01:51.676 ==> default: -> value=-device, 00:01:51.677 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:51.677 ==> default: -> value=-device, 00:01:51.677 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:51.677 ==> default: -> value=-drive, 00:01:51.677 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:51.677 ==> default: -> value=-device, 00:01:51.677 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:51.677 ==> default: -> value=-drive, 00:01:51.677 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:51.677 ==> default: -> value=-device, 00:01:51.677 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:51.677 ==> default: -> value=-drive, 00:01:51.677 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:51.677 ==> default: -> value=-device, 00:01:51.677 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:51.936 ==> default: Creating shared folders metadata... 00:01:51.936 ==> default: Starting domain. 00:01:53.317 ==> default: Waiting for domain to get an IP address... 00:02:11.410 ==> default: Waiting for SSH to become available... 00:02:11.410 ==> default: Configuring and enabling network interfaces... 
00:02:16.708 default: SSH address: 192.168.121.66:22 00:02:16.708 default: SSH username: vagrant 00:02:16.708 default: SSH auth method: private key 00:02:19.244 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:27.386 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:33.975 ==> default: Mounting SSHFS shared folder... 00:02:35.355 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:35.355 ==> default: Checking Mount.. 00:02:37.264 ==> default: Folder Successfully Mounted! 00:02:37.264 ==> default: Running provisioner: file... 00:02:38.205 default: ~/.gitconfig => .gitconfig 00:02:38.463 00:02:38.463 SUCCESS! 00:02:38.463 00:02:38.463 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:38.463 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:38.463 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 
00:02:38.463 00:02:38.471 [Pipeline] } 00:02:38.484 [Pipeline] // stage 00:02:38.492 [Pipeline] dir 00:02:38.493 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt 00:02:38.494 [Pipeline] { 00:02:38.504 [Pipeline] catchError 00:02:38.506 [Pipeline] { 00:02:38.516 [Pipeline] sh 00:02:38.798 + vagrant ssh-config --host vagrant 00:02:38.798 + sed -ne /^Host/,$p 00:02:38.798 + tee ssh_conf 00:02:41.338 Host vagrant 00:02:41.338 HostName 192.168.121.66 00:02:41.338 User vagrant 00:02:41.338 Port 22 00:02:41.338 UserKnownHostsFile /dev/null 00:02:41.338 StrictHostKeyChecking no 00:02:41.338 PasswordAuthentication no 00:02:41.338 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:41.338 IdentitiesOnly yes 00:02:41.338 LogLevel FATAL 00:02:41.338 ForwardAgent yes 00:02:41.338 ForwardX11 yes 00:02:41.338 00:02:41.352 [Pipeline] withEnv 00:02:41.353 [Pipeline] { 00:02:41.366 [Pipeline] sh 00:02:41.648 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:41.648 source /etc/os-release 00:02:41.648 [[ -e /image.version ]] && img=$(< /image.version) 00:02:41.648 # Minimal, systemd-like check. 00:02:41.648 if [[ -e /.dockerenv ]]; then 00:02:41.648 # Clear garbage from the node's name: 00:02:41.648 # agt-er_autotest_547-896 -> autotest_547-896 00:02:41.648 # $HOSTNAME is the actual container id 00:02:41.648 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:41.648 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:41.648 # We can assume this is a mount from a host where container is running, 00:02:41.648 # so fetch its hostname to easily identify the target swarm worker. 
00:02:41.648 container="$(< /etc/hostname) ($agent)" 00:02:41.648 else 00:02:41.648 # Fallback 00:02:41.648 container=$agent 00:02:41.648 fi 00:02:41.648 fi 00:02:41.648 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:41.648 00:02:41.921 [Pipeline] } 00:02:41.937 [Pipeline] // withEnv 00:02:41.944 [Pipeline] setCustomBuildProperty 00:02:41.957 [Pipeline] stage 00:02:41.959 [Pipeline] { (Tests) 00:02:41.976 [Pipeline] sh 00:02:42.260 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:42.535 [Pipeline] sh 00:02:42.818 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:43.093 [Pipeline] timeout 00:02:43.093 Timeout set to expire in 1 hr 30 min 00:02:43.095 [Pipeline] { 00:02:43.110 [Pipeline] sh 00:02:43.394 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:43.966 HEAD is now at b18e1bd62 version: v24.09.1-pre 00:02:43.978 [Pipeline] sh 00:02:44.258 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:44.531 [Pipeline] sh 00:02:44.811 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:45.088 [Pipeline] sh 00:02:45.453 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:02:45.713 ++ readlink -f spdk_repo 00:02:45.713 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:45.713 + [[ -n /home/vagrant/spdk_repo ]] 00:02:45.713 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:45.713 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:45.713 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:45.713 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:45.713 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:45.713 + [[ raid-vg-autotest == pkgdep-* ]] 00:02:45.713 + cd /home/vagrant/spdk_repo 00:02:45.713 + source /etc/os-release 00:02:45.713 ++ NAME='Fedora Linux' 00:02:45.713 ++ VERSION='39 (Cloud Edition)' 00:02:45.713 ++ ID=fedora 00:02:45.713 ++ VERSION_ID=39 00:02:45.713 ++ VERSION_CODENAME= 00:02:45.713 ++ PLATFORM_ID=platform:f39 00:02:45.713 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:45.713 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:45.713 ++ LOGO=fedora-logo-icon 00:02:45.713 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:45.713 ++ HOME_URL=https://fedoraproject.org/ 00:02:45.713 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:45.713 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:45.713 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:45.713 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:45.713 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:45.713 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:45.713 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:45.713 ++ SUPPORT_END=2024-11-12 00:02:45.713 ++ VARIANT='Cloud Edition' 00:02:45.713 ++ VARIANT_ID=cloud 00:02:45.713 + uname -a 00:02:45.713 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:45.713 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:46.282 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:46.282 Hugepages 00:02:46.282 node hugesize free / total 00:02:46.282 node0 1048576kB 0 / 0 00:02:46.282 node0 2048kB 0 / 0 00:02:46.282 00:02:46.282 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:46.282 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:46.282 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:46.282 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 
nvme1n1 nvme1n2 nvme1n3 00:02:46.282 + rm -f /tmp/spdk-ld-path 00:02:46.282 + source autorun-spdk.conf 00:02:46.282 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:46.282 ++ SPDK_RUN_ASAN=1 00:02:46.282 ++ SPDK_RUN_UBSAN=1 00:02:46.282 ++ SPDK_TEST_RAID=1 00:02:46.282 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:46.282 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:46.282 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:46.282 ++ RUN_NIGHTLY=1 00:02:46.282 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:46.282 + [[ -n '' ]] 00:02:46.282 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:46.282 + for M in /var/spdk/build-*-manifest.txt 00:02:46.282 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:46.282 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:46.282 + for M in /var/spdk/build-*-manifest.txt 00:02:46.282 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:46.282 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:46.282 + for M in /var/spdk/build-*-manifest.txt 00:02:46.282 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:46.282 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:46.282 ++ uname 00:02:46.282 + [[ Linux == \L\i\n\u\x ]] 00:02:46.282 + sudo dmesg -T 00:02:46.282 + sudo dmesg --clear 00:02:46.542 + dmesg_pid=6164 00:02:46.542 + [[ Fedora Linux == FreeBSD ]] 00:02:46.542 + sudo dmesg -Tw 00:02:46.542 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:46.542 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:46.542 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:46.542 + [[ -x /usr/src/fio-static/fio ]] 00:02:46.542 + export FIO_BIN=/usr/src/fio-static/fio 00:02:46.542 + FIO_BIN=/usr/src/fio-static/fio 00:02:46.542 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:46.542 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:46.542 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:46.542 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:46.542 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:46.542 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:46.542 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:46.542 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:46.542 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:46.542 Test configuration: 00:02:46.542 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:46.542 SPDK_RUN_ASAN=1 00:02:46.542 SPDK_RUN_UBSAN=1 00:02:46.542 SPDK_TEST_RAID=1 00:02:46.542 SPDK_TEST_NATIVE_DPDK=v23.11 00:02:46.542 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:46.543 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:46.543 RUN_NIGHTLY=1 20:15:39 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:02:46.543 20:15:39 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:46.543 20:15:39 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:46.543 20:15:39 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:46.543 20:15:39 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:46.543 20:15:39 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:46.543 20:15:39 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:46.543 20:15:39 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:46.543 20:15:39 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:46.543 20:15:39 -- paths/export.sh@5 -- $ export PATH 00:02:46.543 20:15:39 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:46.543 20:15:39 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:46.543 20:15:39 -- common/autobuild_common.sh@479 -- $ date +%s 00:02:46.543 20:15:39 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1732652139.XXXXXX 00:02:46.543 20:15:39 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1732652139.s88dBC 00:02:46.543 20:15:39 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:02:46.543 20:15:39 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']' 00:02:46.543 20:15:39 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:46.543 20:15:39 -- common/autobuild_common.sh@486 -- $ 
scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:46.543 20:15:39 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:46.543 20:15:39 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:46.543 20:15:39 -- common/autobuild_common.sh@495 -- $ get_config_params 00:02:46.543 20:15:39 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:46.543 20:15:39 -- common/autotest_common.sh@10 -- $ set +x 00:02:46.543 20:15:40 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:02:46.543 20:15:40 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:02:46.543 20:15:40 -- pm/common@17 -- $ local monitor 00:02:46.543 20:15:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:46.543 20:15:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:46.543 20:15:40 -- pm/common@25 -- $ sleep 1 00:02:46.543 20:15:40 -- pm/common@21 -- $ date +%s 00:02:46.543 20:15:40 -- pm/common@21 -- $ date +%s 00:02:46.543 20:15:40 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732652140 00:02:46.543 20:15:40 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732652140 00:02:46.543 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732652140_collect-cpu-load.pm.log 00:02:46.543 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732652140_collect-vmstat.pm.log 00:02:47.481 20:15:41 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:02:47.481 20:15:41 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:47.481 20:15:41 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:47.481 20:15:41 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:47.481 20:15:41 -- spdk/autobuild.sh@16 -- $ date -u 00:02:47.481 Tue Nov 26 08:15:41 PM UTC 2024 00:02:47.481 20:15:41 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:47.741 v24.09-1-gb18e1bd62 00:02:47.741 20:15:41 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:47.741 20:15:41 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:47.741 20:15:41 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:47.741 20:15:41 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:47.741 20:15:41 -- common/autotest_common.sh@10 -- $ set +x 00:02:47.741 ************************************ 00:02:47.741 START TEST asan 00:02:47.741 ************************************ 00:02:47.741 using asan 00:02:47.741 20:15:41 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:02:47.741 00:02:47.741 real 0m0.000s 00:02:47.741 user 0m0.000s 00:02:47.741 sys 0m0.000s 00:02:47.741 20:15:41 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:47.741 20:15:41 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:47.741 ************************************ 00:02:47.741 END TEST asan 00:02:47.741 ************************************ 00:02:47.741 20:15:41 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:47.741 20:15:41 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:47.741 20:15:41 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:47.741 20:15:41 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:47.741 20:15:41 -- common/autotest_common.sh@10 -- $ set +x 00:02:47.741 
************************************ 00:02:47.741 START TEST ubsan 00:02:47.741 ************************************ 00:02:47.741 using ubsan 00:02:47.741 20:15:41 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:47.741 00:02:47.741 real 0m0.001s 00:02:47.741 user 0m0.000s 00:02:47.741 sys 0m0.000s 00:02:47.741 20:15:41 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:47.741 20:15:41 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:47.741 ************************************ 00:02:47.741 END TEST ubsan 00:02:47.741 ************************************ 00:02:47.741 20:15:41 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:02:47.741 20:15:41 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:47.742 20:15:41 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:47.742 20:15:41 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:02:47.742 20:15:41 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:47.742 20:15:41 -- common/autotest_common.sh@10 -- $ set +x 00:02:47.742 ************************************ 00:02:47.742 START TEST build_native_dpdk 00:02:47.742 ************************************ 00:02:47.742 20:15:41 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:02:47.742 20:15:41 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:47.742 20:15:41 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:47.742 20:15:41 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:47.742 20:15:41 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:47.742 20:15:41 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:47.742 20:15:41 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:47.742 20:15:41 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 
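The asan and ubsan checks above both go through the same `run_test` wrapper, which prints START/END banners around a timed command (hence the `real`/`user`/`sys` lines). A minimal sketch of that pattern — `run_test_sketch` is a hypothetical name, not SPDK's actual `autotest_common.sh` helper, which also handles xtrace toggling and exit-code accounting:

```shell
#!/usr/bin/env bash
# Hedged sketch of the START/END banner + timing pattern seen in the log above.
run_test_sketch() {
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  time "$@"                 # run the wrapped command, timing it
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
}

run_test_sketch asan echo 'using asan'
```

The banners make it easy to grep a long autotest log for the boundaries of a single sub-test.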
00:02:47.742 20:15:41 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:47.742 20:15:41 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:47.742 20:15:41 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:47.742 20:15:41 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:47.742 20:15:41 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:47.742 20:15:41 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:47.742 20:15:41 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:47.742 20:15:41 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:47.742 20:15:41 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:47.742 20:15:41 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:47.742 20:15:41 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:02:47.742 20:15:41 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:47.742 20:15:41 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:47.742 eeb0605f11 version: 23.11.0 00:02:47.742 238778122a doc: update release notes for 23.11 00:02:47.742 46aa6b3cfc doc: fix description of RSS features 00:02:47.742 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:47.742 7e421ae345 devtools: support skipping forbid rule check 00:02:47.742 20:15:41 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:47.742 20:15:41 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:47.742 20:15:41 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:02:47.742 20:15:41 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:47.742 20:15:41 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:47.742 20:15:41 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:47.742 20:15:41 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:47.742 20:15:41 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:47.742 20:15:41 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:47.742 20:15:41 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:47.742 20:15:41 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:47.742 20:15:41 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:47.742 20:15:41 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:47.742 20:15:41 build_native_dpdk -- 
common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:47.742 20:15:41 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:47.742 20:15:41 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:47.742 20:15:41 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:47.742 20:15:41 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:02:47.742 20:15:41 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:02:47.742 20:15:41 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:47.742 20:15:41 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:47.742 20:15:41 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:47.742 20:15:41 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:47.742 20:15:41 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:47.742 20:15:41 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:47.742 20:15:41 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:47.742 20:15:41 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:47.742 20:15:41 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:47.742 20:15:41 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:47.742 20:15:41 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:47.742 20:15:41 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:47.742 20:15:41 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:47.742 20:15:41 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:47.742 20:15:41 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:47.742 20:15:41 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:47.742 20:15:41 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:47.742 20:15:41 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:47.742 20:15:41 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:47.742 20:15:41 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:47.742 20:15:41 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:47.742 20:15:41 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:47.742 20:15:41 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:47.742 20:15:41 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:47.742 20:15:41 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:47.742 20:15:41 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:47.742 20:15:41 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:47.742 patching file config/rte_config.h 00:02:47.742 Hunk #1 succeeded at 60 (offset 1 line). 
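The `lt 23.11.0 21.11.0` check traced above delegates to `cmp_versions` in scripts/common.sh, which splits each version string on `.`, `-`, and `:` and compares the components numerically, left to right, stopping at the first difference. A hedged re-implementation of just the less-than case — `ver_lt` is a hypothetical function sketching the logic, not the actual script:

```shell
#!/usr/bin/env bash
# ver_lt A B: succeed (exit 0) iff version A is strictly lower than B.
# Hypothetical sketch of the component-wise comparison traced in the log.
ver_lt() {
  local IFS='.-:'                 # split on the same separators as the log
  local -a a b
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  local v x y
  for (( v = 0; v < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); v++ )); do
    x=${a[v]:-0}; y=${b[v]:-0}
    (( 10#$x > 10#$y )) && return 1   # first differing component is greater
    (( 10#$x < 10#$y )) && return 0   # first differing component is lower
  done
  return 1                            # all components equal: not strictly lower
}

ver_lt 23.11.0 21.11.0 || echo "23.11.0 is not lower than 21.11.0"
ver_lt 23.11.0 24.07.0 && echo "23.11.0 is lower than 24.07.0"
```

The `10#` base prefix keeps components like `07` from being read as octal; this matches why the second check in the log (`lt 23.11.0 24.07.0`) returns 0 and triggers the `rte_pcapng.c` patch.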
00:02:47.742 20:15:41 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:02:47.742 20:15:41 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:02:47.742 20:15:41 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:47.742 20:15:41 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:47.742 20:15:41 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:47.742 20:15:41 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:47.742 20:15:41 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:47.742 20:15:41 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:47.742 20:15:41 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:47.742 20:15:41 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:47.742 20:15:41 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:47.742 20:15:41 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:47.742 20:15:41 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:47.742 20:15:41 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:47.742 20:15:41 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:47.742 20:15:41 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:47.743 20:15:41 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:47.743 20:15:41 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:47.743 20:15:41 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:47.743 20:15:41 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:47.743 20:15:41 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:47.743 20:15:41 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:47.743 20:15:41 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:47.743 20:15:41 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:47.743 20:15:41 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:47.743 20:15:41 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:47.743 20:15:41 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:47.743 20:15:41 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:47.743 20:15:41 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:47.743 20:15:41 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:48.002 patching file lib/pcapng/rte_pcapng.c 00:02:48.002 20:15:41 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 23.11.0 24.07.0 00:02:48.002 20:15:41 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:02:48.002 20:15:41 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:48.002 20:15:41 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:48.002 20:15:41 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:48.002 20:15:41 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:48.002 20:15:41 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:48.002 20:15:41 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:48.002 20:15:41 build_native_dpdk -- 
scripts/common.sh@338 -- $ local 'op=>=' 00:02:48.002 20:15:41 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:48.002 20:15:41 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:48.002 20:15:41 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:48.002 20:15:41 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:48.002 20:15:41 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:48.002 20:15:41 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:48.002 20:15:41 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:48.002 20:15:41 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:48.002 20:15:41 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:48.002 20:15:41 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:48.002 20:15:41 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:48.002 20:15:41 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:48.002 20:15:41 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:48.002 20:15:41 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:48.002 20:15:41 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:48.002 20:15:41 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:48.002 20:15:41 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:48.002 20:15:41 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:48.002 20:15:41 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:48.002 20:15:41 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:48.002 20:15:41 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:02:48.002 20:15:41 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:02:48.002 20:15:41 build_native_dpdk -- common/autobuild_common.sh@184 -- 
$ '[' Linux = FreeBSD ']' 00:02:48.002 20:15:41 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:48.002 20:15:41 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:53.279 The Meson build system 00:02:53.279 Version: 1.5.0 00:02:53.279 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:53.279 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:53.279 Build type: native build 00:02:53.279 Program cat found: YES (/usr/bin/cat) 00:02:53.279 Project name: DPDK 00:02:53.279 Project version: 23.11.0 00:02:53.279 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:53.279 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:53.279 Host machine cpu family: x86_64 00:02:53.279 Host machine cpu: x86_64 00:02:53.279 Message: ## Building in Developer Mode ## 00:02:53.279 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:53.279 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:53.279 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:53.279 Program python3 found: YES (/usr/bin/python3) 00:02:53.279 Program cat found: YES (/usr/bin/cat) 00:02:53.279 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
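The trailing comma in `-Denable_drivers=bus,...,net/i40e/base,` comes from the `printf %s,` join over the `DPDK_DRIVERS` array shown just above (one `%s,` per argument, so the format repeats and leaves a final comma). A minimal sketch of that join, with the array values taken verbatim from the log:

```shell
#!/usr/bin/env bash
# Comma-join the driver list the way the log's `printf %s, ...` call does.
DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
drivers=$(printf %s, "${DPDK_DRIVERS[@]}")   # format string reapplied per element
echo "$drivers"   # bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
```

Meson tolerates the trailing comma in the option value, so no trimming step is needed before passing it to `-Denable_drivers`.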
00:02:53.279 Compiler for C supports arguments -march=native: YES 00:02:53.279 Checking for size of "void *" : 8 00:02:53.279 Checking for size of "void *" : 8 (cached) 00:02:53.279 Library m found: YES 00:02:53.279 Library numa found: YES 00:02:53.279 Has header "numaif.h" : YES 00:02:53.279 Library fdt found: NO 00:02:53.279 Library execinfo found: NO 00:02:53.279 Has header "execinfo.h" : YES 00:02:53.279 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:53.279 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:53.279 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:53.279 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:53.279 Run-time dependency openssl found: YES 3.1.1 00:02:53.279 Run-time dependency libpcap found: YES 1.10.4 00:02:53.279 Has header "pcap.h" with dependency libpcap: YES 00:02:53.279 Compiler for C supports arguments -Wcast-qual: YES 00:02:53.279 Compiler for C supports arguments -Wdeprecated: YES 00:02:53.279 Compiler for C supports arguments -Wformat: YES 00:02:53.279 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:53.279 Compiler for C supports arguments -Wformat-security: NO 00:02:53.279 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:53.279 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:53.279 Compiler for C supports arguments -Wnested-externs: YES 00:02:53.279 Compiler for C supports arguments -Wold-style-definition: YES 00:02:53.279 Compiler for C supports arguments -Wpointer-arith: YES 00:02:53.279 Compiler for C supports arguments -Wsign-compare: YES 00:02:53.279 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:53.279 Compiler for C supports arguments -Wundef: YES 00:02:53.279 Compiler for C supports arguments -Wwrite-strings: YES 00:02:53.279 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:53.279 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:53.279 Compiler for C 
supports arguments -Wno-missing-field-initializers: YES 00:02:53.279 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:53.279 Program objdump found: YES (/usr/bin/objdump) 00:02:53.279 Compiler for C supports arguments -mavx512f: YES 00:02:53.279 Checking if "AVX512 checking" compiles: YES 00:02:53.279 Fetching value of define "__SSE4_2__" : 1 00:02:53.279 Fetching value of define "__AES__" : 1 00:02:53.279 Fetching value of define "__AVX__" : 1 00:02:53.279 Fetching value of define "__AVX2__" : 1 00:02:53.279 Fetching value of define "__AVX512BW__" : 1 00:02:53.279 Fetching value of define "__AVX512CD__" : 1 00:02:53.279 Fetching value of define "__AVX512DQ__" : 1 00:02:53.279 Fetching value of define "__AVX512F__" : 1 00:02:53.279 Fetching value of define "__AVX512VL__" : 1 00:02:53.279 Fetching value of define "__PCLMUL__" : 1 00:02:53.279 Fetching value of define "__RDRND__" : 1 00:02:53.279 Fetching value of define "__RDSEED__" : 1 00:02:53.279 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:53.279 Fetching value of define "__znver1__" : (undefined) 00:02:53.279 Fetching value of define "__znver2__" : (undefined) 00:02:53.279 Fetching value of define "__znver3__" : (undefined) 00:02:53.279 Fetching value of define "__znver4__" : (undefined) 00:02:53.279 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:53.279 Message: lib/log: Defining dependency "log" 00:02:53.279 Message: lib/kvargs: Defining dependency "kvargs" 00:02:53.279 Message: lib/telemetry: Defining dependency "telemetry" 00:02:53.279 Checking for function "getentropy" : NO 00:02:53.279 Message: lib/eal: Defining dependency "eal" 00:02:53.279 Message: lib/ring: Defining dependency "ring" 00:02:53.279 Message: lib/rcu: Defining dependency "rcu" 00:02:53.279 Message: lib/mempool: Defining dependency "mempool" 00:02:53.279 Message: lib/mbuf: Defining dependency "mbuf" 00:02:53.279 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:53.279 Fetching 
value of define "__AVX512F__" : 1 (cached) 00:02:53.279 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:53.279 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:53.279 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:53.279 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:53.279 Compiler for C supports arguments -mpclmul: YES 00:02:53.279 Compiler for C supports arguments -maes: YES 00:02:53.279 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:53.279 Compiler for C supports arguments -mavx512bw: YES 00:02:53.279 Compiler for C supports arguments -mavx512dq: YES 00:02:53.279 Compiler for C supports arguments -mavx512vl: YES 00:02:53.279 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:53.279 Compiler for C supports arguments -mavx2: YES 00:02:53.279 Compiler for C supports arguments -mavx: YES 00:02:53.279 Message: lib/net: Defining dependency "net" 00:02:53.279 Message: lib/meter: Defining dependency "meter" 00:02:53.279 Message: lib/ethdev: Defining dependency "ethdev" 00:02:53.279 Message: lib/pci: Defining dependency "pci" 00:02:53.279 Message: lib/cmdline: Defining dependency "cmdline" 00:02:53.279 Message: lib/metrics: Defining dependency "metrics" 00:02:53.279 Message: lib/hash: Defining dependency "hash" 00:02:53.279 Message: lib/timer: Defining dependency "timer" 00:02:53.279 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:53.279 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:53.279 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:53.279 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:53.279 Message: lib/acl: Defining dependency "acl" 00:02:53.279 Message: lib/bbdev: Defining dependency "bbdev" 00:02:53.280 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:53.280 Run-time dependency libelf found: YES 0.191 00:02:53.280 Message: lib/bpf: Defining dependency "bpf" 00:02:53.280 Message: lib/cfgfile: Defining dependency 
"cfgfile" 00:02:53.280 Message: lib/compressdev: Defining dependency "compressdev" 00:02:53.280 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:53.280 Message: lib/distributor: Defining dependency "distributor" 00:02:53.280 Message: lib/dmadev: Defining dependency "dmadev" 00:02:53.280 Message: lib/efd: Defining dependency "efd" 00:02:53.280 Message: lib/eventdev: Defining dependency "eventdev" 00:02:53.280 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:53.280 Message: lib/gpudev: Defining dependency "gpudev" 00:02:53.280 Message: lib/gro: Defining dependency "gro" 00:02:53.280 Message: lib/gso: Defining dependency "gso" 00:02:53.280 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:53.280 Message: lib/jobstats: Defining dependency "jobstats" 00:02:53.280 Message: lib/latencystats: Defining dependency "latencystats" 00:02:53.280 Message: lib/lpm: Defining dependency "lpm" 00:02:53.280 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:53.280 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:53.280 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:53.280 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:53.280 Message: lib/member: Defining dependency "member" 00:02:53.280 Message: lib/pcapng: Defining dependency "pcapng" 00:02:53.280 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:53.280 Message: lib/power: Defining dependency "power" 00:02:53.280 Message: lib/rawdev: Defining dependency "rawdev" 00:02:53.280 Message: lib/regexdev: Defining dependency "regexdev" 00:02:53.280 Message: lib/mldev: Defining dependency "mldev" 00:02:53.280 Message: lib/rib: Defining dependency "rib" 00:02:53.280 Message: lib/reorder: Defining dependency "reorder" 00:02:53.280 Message: lib/sched: Defining dependency "sched" 00:02:53.280 Message: lib/security: Defining dependency "security" 00:02:53.280 Message: lib/stack: Defining dependency "stack" 00:02:53.280 Has header 
"linux/userfaultfd.h" : YES 00:02:53.280 Has header "linux/vduse.h" : YES 00:02:53.280 Message: lib/vhost: Defining dependency "vhost" 00:02:53.280 Message: lib/ipsec: Defining dependency "ipsec" 00:02:53.280 Message: lib/pdcp: Defining dependency "pdcp" 00:02:53.280 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:53.280 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:53.280 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:53.280 Message: lib/fib: Defining dependency "fib" 00:02:53.280 Message: lib/port: Defining dependency "port" 00:02:53.280 Message: lib/pdump: Defining dependency "pdump" 00:02:53.280 Message: lib/table: Defining dependency "table" 00:02:53.280 Message: lib/pipeline: Defining dependency "pipeline" 00:02:53.280 Message: lib/graph: Defining dependency "graph" 00:02:53.280 Message: lib/node: Defining dependency "node" 00:02:53.280 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:53.280 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:53.280 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:55.201 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:55.201 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:55.201 Compiler for C supports arguments -Wno-unused-value: YES 00:02:55.201 Compiler for C supports arguments -Wno-format: YES 00:02:55.201 Compiler for C supports arguments -Wno-format-security: YES 00:02:55.201 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:55.201 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:55.201 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:55.201 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:55.201 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:55.201 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:55.201 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:55.201 Compiler for C supports 
arguments -mavx512bw: YES (cached) 00:02:55.201 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:55.201 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:55.201 Has header "sys/epoll.h" : YES 00:02:55.201 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:55.201 Configuring doxy-api-html.conf using configuration 00:02:55.201 Configuring doxy-api-man.conf using configuration 00:02:55.201 Program mandb found: YES (/usr/bin/mandb) 00:02:55.201 Program sphinx-build found: NO 00:02:55.201 Configuring rte_build_config.h using configuration 00:02:55.201 Message: 00:02:55.201 ================= 00:02:55.201 Applications Enabled 00:02:55.201 ================= 00:02:55.201 00:02:55.201 apps: 00:02:55.201 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:55.201 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:55.201 test-pmd, test-regex, test-sad, test-security-perf, 00:02:55.201 00:02:55.201 Message: 00:02:55.201 ================= 00:02:55.201 Libraries Enabled 00:02:55.201 ================= 00:02:55.201 00:02:55.201 libs: 00:02:55.201 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:55.201 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:55.201 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:55.201 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:55.201 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:55.201 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:55.201 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:55.201 00:02:55.201 00:02:55.201 Message: 00:02:55.201 =============== 00:02:55.201 Drivers Enabled 00:02:55.201 =============== 00:02:55.201 00:02:55.201 common: 00:02:55.201 00:02:55.201 bus: 00:02:55.201 pci, vdev, 00:02:55.201 mempool: 00:02:55.201 ring, 00:02:55.201 dma: 
00:02:55.201 00:02:55.201 net: 00:02:55.201 i40e, 00:02:55.201 raw: 00:02:55.201 00:02:55.201 crypto: 00:02:55.201 00:02:55.201 compress: 00:02:55.201 00:02:55.201 regex: 00:02:55.201 00:02:55.201 ml: 00:02:55.201 00:02:55.201 vdpa: 00:02:55.201 00:02:55.201 event: 00:02:55.201 00:02:55.201 baseband: 00:02:55.201 00:02:55.201 gpu: 00:02:55.201 00:02:55.201 00:02:55.201 Message: 00:02:55.201 ================= 00:02:55.201 Content Skipped 00:02:55.201 ================= 00:02:55.201 00:02:55.201 apps: 00:02:55.201 00:02:55.201 libs: 00:02:55.201 00:02:55.201 drivers: 00:02:55.201 common/cpt: not in enabled drivers build config 00:02:55.201 common/dpaax: not in enabled drivers build config 00:02:55.201 common/iavf: not in enabled drivers build config 00:02:55.201 common/idpf: not in enabled drivers build config 00:02:55.201 common/mvep: not in enabled drivers build config 00:02:55.201 common/octeontx: not in enabled drivers build config 00:02:55.201 bus/auxiliary: not in enabled drivers build config 00:02:55.201 bus/cdx: not in enabled drivers build config 00:02:55.201 bus/dpaa: not in enabled drivers build config 00:02:55.201 bus/fslmc: not in enabled drivers build config 00:02:55.201 bus/ifpga: not in enabled drivers build config 00:02:55.201 bus/platform: not in enabled drivers build config 00:02:55.201 bus/vmbus: not in enabled drivers build config 00:02:55.201 common/cnxk: not in enabled drivers build config 00:02:55.201 common/mlx5: not in enabled drivers build config 00:02:55.201 common/nfp: not in enabled drivers build config 00:02:55.201 common/qat: not in enabled drivers build config 00:02:55.201 common/sfc_efx: not in enabled drivers build config 00:02:55.201 mempool/bucket: not in enabled drivers build config 00:02:55.201 mempool/cnxk: not in enabled drivers build config 00:02:55.201 mempool/dpaa: not in enabled drivers build config 00:02:55.201 mempool/dpaa2: not in enabled drivers build config 00:02:55.201 mempool/octeontx: not in enabled drivers build 
config
00:02:55.201 mempool/stack: not in enabled drivers build config
00:02:55.201 dma/cnxk: not in enabled drivers build config
00:02:55.201 dma/dpaa: not in enabled drivers build config
00:02:55.201 dma/dpaa2: not in enabled drivers build config
00:02:55.201 dma/hisilicon: not in enabled drivers build config
00:02:55.201 dma/idxd: not in enabled drivers build config
00:02:55.201 dma/ioat: not in enabled drivers build config
00:02:55.201 dma/skeleton: not in enabled drivers build config
00:02:55.201 net/af_packet: not in enabled drivers build config
00:02:55.201 net/af_xdp: not in enabled drivers build config
00:02:55.201 net/ark: not in enabled drivers build config
00:02:55.201 net/atlantic: not in enabled drivers build config
00:02:55.201 net/avp: not in enabled drivers build config
00:02:55.201 net/axgbe: not in enabled drivers build config
00:02:55.201 net/bnx2x: not in enabled drivers build config
00:02:55.201 net/bnxt: not in enabled drivers build config
00:02:55.201 net/bonding: not in enabled drivers build config
00:02:55.201 net/cnxk: not in enabled drivers build config
00:02:55.201 net/cpfl: not in enabled drivers build config
00:02:55.201 net/cxgbe: not in enabled drivers build config
00:02:55.201 net/dpaa: not in enabled drivers build config
00:02:55.201 net/dpaa2: not in enabled drivers build config
00:02:55.201 net/e1000: not in enabled drivers build config
00:02:55.201 net/ena: not in enabled drivers build config
00:02:55.201 net/enetc: not in enabled drivers build config
00:02:55.201 net/enetfec: not in enabled drivers build config
00:02:55.201 net/enic: not in enabled drivers build config
00:02:55.201 net/failsafe: not in enabled drivers build config
00:02:55.201 net/fm10k: not in enabled drivers build config
00:02:55.201 net/gve: not in enabled drivers build config
00:02:55.201 net/hinic: not in enabled drivers build config
00:02:55.201 net/hns3: not in enabled drivers build config
00:02:55.201 net/iavf: not in enabled drivers build config
00:02:55.201 net/ice: not in enabled drivers build config
00:02:55.201 net/idpf: not in enabled drivers build config
00:02:55.201 net/igc: not in enabled drivers build config
00:02:55.201 net/ionic: not in enabled drivers build config
00:02:55.201 net/ipn3ke: not in enabled drivers build config
00:02:55.201 net/ixgbe: not in enabled drivers build config
00:02:55.201 net/mana: not in enabled drivers build config
00:02:55.202 net/memif: not in enabled drivers build config
00:02:55.202 net/mlx4: not in enabled drivers build config
00:02:55.202 net/mlx5: not in enabled drivers build config
00:02:55.202 net/mvneta: not in enabled drivers build config
00:02:55.202 net/mvpp2: not in enabled drivers build config
00:02:55.202 net/netvsc: not in enabled drivers build config
00:02:55.202 net/nfb: not in enabled drivers build config
00:02:55.202 net/nfp: not in enabled drivers build config
00:02:55.202 net/ngbe: not in enabled drivers build config
00:02:55.202 net/null: not in enabled drivers build config
00:02:55.202 net/octeontx: not in enabled drivers build config
00:02:55.202 net/octeon_ep: not in enabled drivers build config
00:02:55.202 net/pcap: not in enabled drivers build config
00:02:55.202 net/pfe: not in enabled drivers build config
00:02:55.202 net/qede: not in enabled drivers build config
00:02:55.202 net/ring: not in enabled drivers build config
00:02:55.202 net/sfc: not in enabled drivers build config
00:02:55.202 net/softnic: not in enabled drivers build config
00:02:55.202 net/tap: not in enabled drivers build config
00:02:55.202 net/thunderx: not in enabled drivers build config
00:02:55.202 net/txgbe: not in enabled drivers build config
00:02:55.202 net/vdev_netvsc: not in enabled drivers build config
00:02:55.202 net/vhost: not in enabled drivers build config
00:02:55.202 net/virtio: not in enabled drivers build config
00:02:55.202 net/vmxnet3: not in enabled drivers build config
00:02:55.202 raw/cnxk_bphy: not in enabled drivers build config
00:02:55.202 raw/cnxk_gpio: not in enabled drivers build config
00:02:55.202 raw/dpaa2_cmdif: not in enabled drivers build config
00:02:55.202 raw/ifpga: not in enabled drivers build config
00:02:55.202 raw/ntb: not in enabled drivers build config
00:02:55.202 raw/skeleton: not in enabled drivers build config
00:02:55.202 crypto/armv8: not in enabled drivers build config
00:02:55.202 crypto/bcmfs: not in enabled drivers build config
00:02:55.202 crypto/caam_jr: not in enabled drivers build config
00:02:55.202 crypto/ccp: not in enabled drivers build config
00:02:55.202 crypto/cnxk: not in enabled drivers build config
00:02:55.202 crypto/dpaa_sec: not in enabled drivers build config
00:02:55.202 crypto/dpaa2_sec: not in enabled drivers build config
00:02:55.202 crypto/ipsec_mb: not in enabled drivers build config
00:02:55.202 crypto/mlx5: not in enabled drivers build config
00:02:55.202 crypto/mvsam: not in enabled drivers build config
00:02:55.202 crypto/nitrox: not in enabled drivers build config
00:02:55.202 crypto/null: not in enabled drivers build config
00:02:55.202 crypto/octeontx: not in enabled drivers build config
00:02:55.202 crypto/openssl: not in enabled drivers build config
00:02:55.202 crypto/scheduler: not in enabled drivers build config
00:02:55.202 crypto/uadk: not in enabled drivers build config
00:02:55.202 crypto/virtio: not in enabled drivers build config
00:02:55.202 compress/isal: not in enabled drivers build config
00:02:55.202 compress/mlx5: not in enabled drivers build config
00:02:55.202 compress/octeontx: not in enabled drivers build config
00:02:55.202 compress/zlib: not in enabled drivers build config
00:02:55.202 regex/mlx5: not in enabled drivers build config
00:02:55.202 regex/cn9k: not in enabled drivers build config
00:02:55.202 ml/cnxk: not in enabled drivers build config
00:02:55.202 vdpa/ifc: not in enabled drivers build config
00:02:55.202 vdpa/mlx5: not in enabled drivers build config
00:02:55.202 vdpa/nfp: not in enabled drivers build config
00:02:55.202 vdpa/sfc: not in enabled drivers build config
00:02:55.202 event/cnxk: not in enabled drivers build config
00:02:55.202 event/dlb2: not in enabled drivers build config
00:02:55.202 event/dpaa: not in enabled drivers build config
00:02:55.202 event/dpaa2: not in enabled drivers build config
00:02:55.202 event/dsw: not in enabled drivers build config
00:02:55.202 event/opdl: not in enabled drivers build config
00:02:55.202 event/skeleton: not in enabled drivers build config
00:02:55.202 event/sw: not in enabled drivers build config
00:02:55.202 event/octeontx: not in enabled drivers build config
00:02:55.202 baseband/acc: not in enabled drivers build config
00:02:55.202 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:02:55.202 baseband/fpga_lte_fec: not in enabled drivers build config
00:02:55.202 baseband/la12xx: not in enabled drivers build config
00:02:55.202 baseband/null: not in enabled drivers build config
00:02:55.202 baseband/turbo_sw: not in enabled drivers build config
00:02:55.202 gpu/cuda: not in enabled drivers build config
00:02:55.202 
00:02:55.202 
00:02:55.202 Build targets in project: 217
00:02:55.202 
00:02:55.202 DPDK 23.11.0
00:02:55.202 
00:02:55.202 User defined options
00:02:55.202 libdir : lib
00:02:55.202 prefix : /home/vagrant/spdk_repo/dpdk/build
00:02:55.202 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:02:55.202 c_link_args :
00:02:55.202 enable_docs : false
00:02:55.202 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:02:55.202 enable_kmods : false
00:02:55.202 machine : native
00:02:55.202 tests : false
00:02:55.202 
00:02:55.202 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:55.202 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
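[editor's note] The configuration summary above corresponds roughly to the meson invocation sketched below. This is a sketch only, assuming it is run from the DPDK source root (`/home/vagrant/spdk_repo/dpdk`) with `build-tmp` as the build directory (the directory ninja enters in the next step); option values mirror the "User defined options" in the log, except that the `enable_drivers` list is truncated there (trailing comma), so only the visible entries are reproduced. Using the explicit `meson setup` subcommand avoids the deprecation warning the job printed:

```shell
# Sketch of the logged configuration, assuming the DPDK source root as CWD.
# The enable_drivers value is incomplete in the log; entries shown are the
# ones visible in the summary.
meson setup build-tmp \
  --prefix=/home/vagrant/spdk_repo/dpdk/build \
  --libdir=lib \
  -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
  -Denable_docs=false \
  -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
  -Denable_kmods=false \
  -Dmachine=native \
  -Dtests=false

# Then build, as the job does below:
ninja -C build-tmp -j10
```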
00:02:55.462 20:15:48 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:55.462 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:55.462 [1/707] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:55.462 [2/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:55.462 [3/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:55.462 [4/707] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:55.462 [5/707] Linking static target lib/librte_kvargs.a 00:02:55.462 [6/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:55.462 [7/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:55.721 [8/707] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:55.721 [9/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:55.721 [10/707] Linking static target lib/librte_log.a 00:02:55.721 [11/707] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.721 [12/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:55.721 [13/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:55.721 [14/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:55.981 [15/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:55.981 [16/707] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.981 [17/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:55.981 [18/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:55.981 [19/707] Linking target lib/librte_log.so.24.0 00:02:56.240 [20/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:56.240 [21/707] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:56.240 [22/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:56.240 [23/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:56.240 [24/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:56.240 [25/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:56.240 [26/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:56.240 [27/707] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:56.499 [28/707] Linking target lib/librte_kvargs.so.24.0 00:02:56.499 [29/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:56.499 [30/707] Linking static target lib/librte_telemetry.a 00:02:56.499 [31/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:56.499 [32/707] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:56.499 [33/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:56.499 [34/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:56.499 [35/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:56.499 [36/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:56.759 [37/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:56.759 [38/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:56.759 [39/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:56.759 [40/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:56.759 [41/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:56.759 [42/707] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:56.759 [43/707] Linking target lib/librte_telemetry.so.24.0 00:02:56.759 [44/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:57.019 [45/707] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:57.019 [46/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:57.019 [47/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:57.278 [48/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:57.279 [49/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:57.279 [50/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:57.279 [51/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:57.279 [52/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:57.279 [53/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:57.279 [54/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:57.279 [55/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:57.279 [56/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:57.537 [57/707] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:57.537 [58/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:57.537 [59/707] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:57.537 [60/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:57.537 [61/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:57.537 [62/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:57.538 [63/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:57.538 [64/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:57.538 [65/707] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:57.538 [66/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:57.795 [67/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:57.795 [68/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:57.795 [69/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:58.055 [70/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:58.055 [71/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:58.055 [72/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:58.055 [73/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:58.055 [74/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:58.055 [75/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:58.055 [76/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:58.055 [77/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:58.055 [78/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:58.314 [79/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:58.314 [80/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:58.314 [81/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:58.314 [82/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:58.314 [83/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:58.314 [84/707] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:58.314 [85/707] Linking static target lib/librte_ring.a 00:02:58.572 [86/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:58.572 [87/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:58.572 [88/707] Generating lib/ring.sym_chk with a custom command (wrapped by 
meson to capture output) 00:02:58.572 [89/707] Linking static target lib/librte_eal.a 00:02:58.572 [90/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:58.830 [91/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:58.830 [92/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:58.830 [93/707] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:58.830 [94/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:58.830 [95/707] Linking static target lib/librte_mempool.a 00:02:59.088 [96/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:59.088 [97/707] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:59.088 [98/707] Linking static target lib/librte_rcu.a 00:02:59.088 [99/707] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:59.088 [100/707] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:59.088 [101/707] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:59.345 [102/707] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:59.345 [103/707] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:59.345 [104/707] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:59.345 [105/707] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.346 [106/707] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.346 [107/707] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:59.346 [108/707] Linking static target lib/librte_net.a 00:02:59.603 [109/707] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:59.603 [110/707] Linking static target lib/librte_meter.a 00:02:59.603 [111/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:59.603 [112/707] Linking static target lib/librte_mbuf.a 00:02:59.603 [113/707] 
Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.603 [114/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:59.860 [115/707] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.860 [116/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:59.860 [117/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:59.860 [118/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:00.118 [119/707] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.118 [120/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:00.118 [121/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:00.684 [122/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:00.684 [123/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:00.684 [124/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:00.684 [125/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:00.684 [126/707] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:00.684 [127/707] Linking static target lib/librte_pci.a 00:03:00.684 [128/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:00.684 [129/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:00.684 [130/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:00.684 [131/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:00.988 [132/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:00.988 [133/707] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.988 [134/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:00.988 [135/707] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:00.988 [136/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:00.988 [137/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:00.988 [138/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:00.988 [139/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:00.988 [140/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:00.988 [141/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:00.988 [142/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:00.988 [143/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:00.988 [144/707] Linking static target lib/librte_cmdline.a 00:03:01.256 [145/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:01.256 [146/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:03:01.256 [147/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:03:01.256 [148/707] Linking static target lib/librte_metrics.a 00:03:01.521 [149/707] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:01.521 [150/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:01.777 [151/707] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.777 [152/707] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.777 [153/707] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:01.777 [154/707] Linking static target lib/librte_timer.a 00:03:01.777 [155/707] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:02.343 [156/707] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.343 [157/707] 
Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:03:02.343 [158/707] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:03:02.343 [159/707] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:03:02.343 [160/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:03:02.602 [161/707] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:03:02.602 [162/707] Linking static target lib/librte_bitratestats.a 00:03:02.602 [163/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:03:02.861 [164/707] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.861 [165/707] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:03:02.861 [166/707] Linking static target lib/librte_bbdev.a 00:03:03.119 [167/707] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:03:03.119 [168/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:03:03.378 [169/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:03:03.378 [170/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:03:03.378 [171/707] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.636 [172/707] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:03.636 [173/707] Linking static target lib/librte_hash.a 00:03:03.636 [174/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:03.636 [175/707] Linking static target lib/librte_ethdev.a 00:03:03.895 [176/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:03:03.895 [177/707] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:03:03.895 [178/707] Linking static target lib/acl/libavx2_tmp.a 00:03:03.895 [179/707] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.895 [180/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:03:03.895 [181/707] Linking target lib/librte_eal.so.24.0 
00:03:04.154 [182/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:03:04.154 [183/707] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.154 [184/707] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:03:04.154 [185/707] Linking target lib/librte_ring.so.24.0 00:03:04.154 [186/707] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:03:04.154 [187/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:03:04.154 [188/707] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:03:04.154 [189/707] Linking target lib/librte_meter.so.24.0 00:03:04.414 [190/707] Linking target lib/librte_pci.so.24.0 00:03:04.414 [191/707] Linking target lib/librte_mempool.so.24.0 00:03:04.414 [192/707] Linking target lib/librte_rcu.so.24.0 00:03:04.414 [193/707] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:03:04.414 [194/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:03:04.414 [195/707] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:03:04.414 [196/707] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:03:04.414 [197/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:03:04.414 [198/707] Linking static target lib/librte_cfgfile.a 00:03:04.414 [199/707] Linking target lib/librte_timer.so.24.0 00:03:04.414 [200/707] Linking target lib/librte_mbuf.so.24.0 00:03:04.414 [201/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:04.414 [202/707] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:03:04.673 [203/707] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:03:04.673 [204/707] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:03:04.673 [205/707] Compiling C object 
lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:03:04.673 [206/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:04.673 [207/707] Linking target lib/librte_net.so.24.0 00:03:04.673 [208/707] Linking static target lib/librte_bpf.a 00:03:04.673 [209/707] Linking target lib/librte_bbdev.so.24.0 00:03:04.673 [210/707] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:03:04.673 [211/707] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.932 [212/707] Linking target lib/librte_hash.so.24.0 00:03:04.932 [213/707] Linking target lib/librte_cmdline.so.24.0 00:03:04.932 [214/707] Linking target lib/librte_cfgfile.so.24.0 00:03:04.932 [215/707] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.932 [216/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:04.932 [217/707] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:03:04.932 [218/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:04.932 [219/707] Linking static target lib/librte_compressdev.a 00:03:04.932 [220/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:03:04.932 [221/707] Linking static target lib/librte_acl.a 00:03:05.191 [222/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:05.191 [223/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:03:05.449 [224/707] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.449 [225/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:05.449 [226/707] Linking target lib/librte_acl.so.24.0 00:03:05.449 [227/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:03:05.449 [228/707] Generating lib/compressdev.sym_chk with a 
custom command (wrapped by meson to capture output) 00:03:05.449 [229/707] Linking target lib/librte_compressdev.so.24.0 00:03:05.449 [230/707] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:03:05.449 [231/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:03:05.449 [232/707] Linking static target lib/librte_distributor.a 00:03:05.706 [233/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:03:05.706 [234/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:05.706 [235/707] Linking static target lib/librte_dmadev.a 00:03:05.706 [236/707] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.706 [237/707] Linking target lib/librte_distributor.so.24.0 00:03:05.966 [238/707] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.966 [239/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:03:05.966 [240/707] Linking target lib/librte_dmadev.so.24.0 00:03:06.224 [241/707] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:03:06.224 [242/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:03:06.224 [243/707] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:03:06.483 [244/707] Linking static target lib/librte_efd.a 00:03:06.483 [245/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:03:06.483 [246/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:03:06.483 [247/707] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.483 [248/707] Linking target lib/librte_efd.so.24.0 00:03:06.483 [249/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:06.742 [250/707] Linking static target lib/librte_cryptodev.a 00:03:06.742 [251/707] 
Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:03:06.742 [252/707] Linking static target lib/librte_dispatcher.a 00:03:06.742 [253/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:03:07.001 [254/707] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:03:07.001 [255/707] Linking static target lib/librte_gpudev.a 00:03:07.001 [256/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:03:07.001 [257/707] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.001 [258/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:03:07.260 [259/707] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:03:07.260 [260/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:03:07.519 [261/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:03:07.519 [262/707] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:03:07.519 [263/707] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.520 [264/707] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.520 [265/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:03:07.780 [266/707] Linking target lib/librte_gpudev.so.24.0 00:03:07.780 [267/707] Linking target lib/librte_cryptodev.so.24.0 00:03:07.780 [268/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:03:07.780 [269/707] Linking static target lib/librte_gro.a 00:03:07.780 [270/707] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:03:07.780 [271/707] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:03:07.780 [272/707] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:03:07.780 [273/707] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.780 
[274/707] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.780 [275/707] Linking target lib/librte_ethdev.so.24.0 00:03:07.780 [276/707] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:03:07.780 [277/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:03:08.040 [278/707] Linking static target lib/librte_eventdev.a 00:03:08.040 [279/707] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:03:08.040 [280/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:03:08.040 [281/707] Linking target lib/librte_metrics.so.24.0 00:03:08.040 [282/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:03:08.040 [283/707] Linking target lib/librte_bpf.so.24.0 00:03:08.040 [284/707] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:03:08.040 [285/707] Linking target lib/librte_gro.so.24.0 00:03:08.040 [286/707] Linking static target lib/librte_gso.a 00:03:08.040 [287/707] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:03:08.040 [288/707] Linking target lib/librte_bitratestats.so.24.0 00:03:08.040 [289/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:03:08.040 [290/707] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:03:08.299 [291/707] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.299 [292/707] Linking target lib/librte_gso.so.24.0 00:03:08.299 [293/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:03:08.299 [294/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:03:08.299 [295/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:03:08.299 [296/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:03:08.558 [297/707] Compiling C object 
lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:03:08.558 [298/707] Linking static target lib/librte_jobstats.a 00:03:08.558 [299/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:03:08.558 [300/707] Linking static target lib/librte_ip_frag.a 00:03:08.558 [301/707] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:03:08.558 [302/707] Linking static target lib/librte_latencystats.a 00:03:08.816 [303/707] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:03:08.816 [304/707] Linking static target lib/member/libsketch_avx512_tmp.a 00:03:08.816 [305/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:03:08.816 [306/707] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:03:08.816 [307/707] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.816 [308/707] Linking target lib/librte_jobstats.so.24.0 00:03:08.816 [309/707] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:08.816 [310/707] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.816 [311/707] Linking target lib/librte_latencystats.so.24.0 00:03:08.816 [312/707] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.081 [313/707] Linking target lib/librte_ip_frag.so.24.0 00:03:09.081 [314/707] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:09.081 [315/707] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:03:09.081 [316/707] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:09.081 [317/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:03:09.081 [318/707] Linking static target lib/librte_lpm.a 00:03:09.340 [319/707] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:03:09.340 [320/707] Compiling C object 
lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:09.340 [321/707] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:09.340 [322/707] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:03:09.340 [323/707] Linking static target lib/librte_pcapng.a 00:03:09.601 [324/707] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:09.601 [325/707] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.601 [326/707] Linking target lib/librte_lpm.so.24.0 00:03:09.601 [327/707] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:03:09.601 [328/707] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:09.601 [329/707] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:09.601 [330/707] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:03:09.601 [331/707] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.601 [332/707] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:09.601 [333/707] Linking target lib/librte_pcapng.so.24.0 00:03:09.861 [334/707] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.861 [335/707] Linking target lib/librte_eventdev.so.24.0 00:03:09.861 [336/707] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:03:09.861 [337/707] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:09.861 [338/707] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:03:09.861 [339/707] Linking target lib/librte_dispatcher.so.24.0 00:03:10.120 [340/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:03:10.120 [341/707] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:10.121 [342/707] Linking static target lib/librte_power.a 00:03:10.121 
[343/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:03:10.121 [344/707] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:03:10.121 [345/707] Linking static target lib/librte_regexdev.a 00:03:10.121 [346/707] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:03:10.121 [347/707] Linking static target lib/librte_rawdev.a 00:03:10.121 [348/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:03:10.121 [349/707] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:03:10.121 [350/707] Linking static target lib/librte_member.a 00:03:10.380 [351/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:03:10.380 [352/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:03:10.380 [353/707] Linking static target lib/librte_mldev.a 00:03:10.639 [354/707] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.640 [355/707] Linking target lib/librte_member.so.24.0 00:03:10.640 [356/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:03:10.640 [357/707] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.640 [358/707] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.640 [359/707] Linking target lib/librte_rawdev.so.24.0 00:03:10.640 [360/707] Linking target lib/librte_power.so.24.0 00:03:10.640 [361/707] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:03:10.640 [362/707] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:03:10.640 [363/707] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:10.640 [364/707] Linking static target lib/librte_reorder.a 00:03:10.899 [365/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:03:10.899 [366/707] Linking static target lib/librte_rib.a 00:03:10.899 [367/707] Generating lib/regexdev.sym_chk 
with a custom command (wrapped by meson to capture output) 00:03:10.899 [368/707] Linking target lib/librte_regexdev.so.24.0 00:03:10.899 [369/707] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:03:10.899 [370/707] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:10.899 [371/707] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.192 [372/707] Linking target lib/librte_reorder.so.24.0 00:03:11.192 [373/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:03:11.192 [374/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:03:11.192 [375/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:03:11.192 [376/707] Linking static target lib/librte_stack.a 00:03:11.192 [377/707] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:03:11.192 [378/707] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.192 [379/707] Linking target lib/librte_rib.so.24.0 00:03:11.192 [380/707] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.452 [381/707] Linking target lib/librte_stack.so.24.0 00:03:11.452 [382/707] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:11.452 [383/707] Linking static target lib/librte_security.a 00:03:11.452 [384/707] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:03:11.452 [385/707] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:11.452 [386/707] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:11.712 [387/707] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.712 [388/707] Linking target lib/librte_mldev.so.24.0 00:03:11.712 [389/707] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:11.712 [390/707] Generating lib/security.sym_chk with a custom command (wrapped by meson to 
capture output) 00:03:11.712 [391/707] Linking target lib/librte_security.so.24.0 00:03:11.973 [392/707] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:03:11.973 [393/707] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:11.973 [394/707] Linking static target lib/librte_sched.a 00:03:11.973 [395/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:12.233 [396/707] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:12.233 [397/707] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.233 [398/707] Linking target lib/librte_sched.so.24.0 00:03:12.233 [399/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:12.233 [400/707] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:03:12.492 [401/707] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:12.492 [402/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:03:12.492 [403/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:12.752 [404/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:12.752 [405/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:03:12.752 [406/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:03:12.752 [407/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:03:13.012 [408/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:13.012 [409/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:13.012 [410/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:03:13.012 [411/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:13.272 [412/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:13.272 [413/707] Linking static target lib/librte_ipsec.a 00:03:13.272 [414/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:13.272 
[415/707] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:03:13.272 [416/707] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.532 [417/707] Linking target lib/librte_ipsec.so.24.0 00:03:13.532 [418/707] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:03:13.532 [419/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:03:13.532 [420/707] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:03:13.790 [421/707] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:13.791 [422/707] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:13.791 [423/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:13.791 [424/707] Linking static target lib/librte_fib.a 00:03:14.050 [425/707] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:14.050 [426/707] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:14.050 [427/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:03:14.050 [428/707] Linking static target lib/librte_pdcp.a 00:03:14.309 [429/707] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.309 [430/707] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:14.309 [431/707] Linking target lib/librte_fib.so.24.0 00:03:14.309 [432/707] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:14.569 [433/707] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.569 [434/707] Linking target lib/librte_pdcp.so.24.0 00:03:14.569 [435/707] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:14.828 [436/707] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:14.828 [437/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:14.828 [438/707] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 
00:03:14.828 [439/707] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:14.828 [440/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:15.088 [441/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:15.347 [442/707] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:15.347 [443/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:15.348 [444/707] Linking static target lib/librte_port.a 00:03:15.348 [445/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:15.348 [446/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:15.348 [447/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:15.348 [448/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:15.607 [449/707] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:15.607 [450/707] Linking static target lib/librte_pdump.a 00:03:15.607 [451/707] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:15.607 [452/707] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.607 [453/707] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:15.607 [454/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:15.866 [455/707] Linking target lib/librte_port.so.24.0 00:03:15.866 [456/707] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.867 [457/707] Linking target lib/librte_pdump.so.24.0 00:03:15.867 [458/707] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:03:16.125 [459/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:16.125 [460/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:16.125 [461/707] Compiling C object 
lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:16.125 [462/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:16.125 [463/707] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:16.125 [464/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:16.693 [465/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:16.693 [466/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:16.693 [467/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:16.693 [468/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:16.693 [469/707] Linking static target lib/librte_table.a 00:03:16.693 [470/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:16.969 [471/707] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:17.228 [472/707] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:17.228 [473/707] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.228 [474/707] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:17.228 [475/707] Linking target lib/librte_table.so.24.0 00:03:17.228 [476/707] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:03:17.487 [477/707] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:17.487 [478/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:03:17.487 [479/707] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:17.487 [480/707] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:17.746 [481/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:03:17.746 [482/707] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:03:17.746 [483/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:18.006 [484/707] 
Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:18.006 [485/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:18.006 [486/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:03:18.006 [487/707] Linking static target lib/librte_graph.a 00:03:18.266 [488/707] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:18.266 [489/707] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:18.266 [490/707] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:03:18.524 [491/707] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:03:18.524 [492/707] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.524 [493/707] Linking target lib/librte_graph.so.24.0 00:03:18.784 [494/707] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:03:18.784 [495/707] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:18.784 [496/707] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:18.784 [497/707] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:03:19.043 [498/707] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:03:19.043 [499/707] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:03:19.043 [500/707] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:19.043 [501/707] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:19.043 [502/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:19.303 [503/707] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:03:19.303 [504/707] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:19.303 [505/707] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:19.303 [506/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:19.562 [507/707] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:19.562 [508/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:19.562 [509/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:19.562 [510/707] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:03:19.562 [511/707] Linking static target lib/librte_node.a 00:03:19.562 [512/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:19.845 [513/707] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.845 [514/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:19.845 [515/707] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:19.846 [516/707] Linking target lib/librte_node.so.24.0 00:03:19.846 [517/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:19.846 [518/707] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:20.113 [519/707] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:20.113 [520/707] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:20.113 [521/707] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:20.113 [522/707] Linking static target drivers/librte_bus_pci.a 00:03:20.113 [523/707] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:20.113 [524/707] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:20.113 [525/707] Linking static target drivers/librte_bus_vdev.a 00:03:20.113 [526/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:20.372 [527/707] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:20.372 [528/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:20.372 [529/707] Compiling C object 
drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:20.372 [530/707] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.372 [531/707] Linking target drivers/librte_bus_vdev.so.24.0 00:03:20.372 [532/707] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:20.372 [533/707] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:20.372 [534/707] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:03:20.631 [535/707] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.631 [536/707] Linking target drivers/librte_bus_pci.so.24.0 00:03:20.631 [537/707] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:20.631 [538/707] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:20.631 [539/707] Linking static target drivers/librte_mempool_ring.a 00:03:20.631 [540/707] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:20.631 [541/707] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:03:20.631 [542/707] Linking target drivers/librte_mempool_ring.so.24.0 00:03:20.631 [543/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:20.891 [544/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:21.149 [545/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:21.408 [546/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:21.408 [547/707] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:21.667 [548/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:21.926 [549/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:22.185 
[550/707] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:22.185 [551/707] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:22.185 [552/707] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:03:22.185 [553/707] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:03:22.444 [554/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:22.445 [555/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:22.445 [556/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:22.704 [557/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:22.704 [558/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:22.963 [559/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:03:22.963 [560/707] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:03:23.223 [561/707] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:03:23.223 [562/707] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:23.223 [563/707] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:03:23.483 [564/707] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:03:23.483 [565/707] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:03:23.743 [566/707] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:03:23.743 [567/707] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:03:23.743 [568/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:23.743 [569/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:24.001 [570/707] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:03:24.001 [571/707] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:03:24.001 [572/707] Compiling C object 
app/dpdk-graph.p/graph_l3fwd.c.o 00:03:24.001 [573/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:24.259 [574/707] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:03:24.259 [575/707] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:03:24.518 [576/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:24.518 [577/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:24.518 [578/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:24.775 [579/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:24.775 [580/707] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:24.775 [581/707] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:24.775 [582/707] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:25.034 [583/707] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:25.034 [584/707] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:25.034 [585/707] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:25.034 [586/707] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:25.034 [587/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:25.034 [588/707] Linking static target drivers/librte_net_i40e.a 00:03:25.293 [589/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:25.293 [590/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:25.553 [591/707] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.553 [592/707] Linking target drivers/librte_net_i40e.so.24.0 00:03:25.553 [593/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:25.815 [594/707] Compiling C object 
app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:25.815 [595/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:25.815 [596/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:25.815 [597/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:25.816 [598/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:26.079 [599/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:26.337 [600/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:26.337 [601/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:26.337 [602/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:26.337 [603/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:26.594 [604/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:26.594 [605/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:26.594 [606/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:26.594 [607/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:26.594 [608/707] Linking static target lib/librte_vhost.a 00:03:26.594 [609/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:26.852 [610/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:26.852 [611/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:26.852 [612/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:26.852 [613/707] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:27.109 [614/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:27.109 [615/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:27.366 [616/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:27.366 [617/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:27.623 [618/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:27.623 [619/707] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.623 [620/707] Linking target lib/librte_vhost.so.24.0 00:03:28.188 [621/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:28.188 [622/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:28.188 [623/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:28.188 [624/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:28.447 [625/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:28.447 [626/707] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:28.447 [627/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:28.447 [628/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:28.447 [629/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:28.705 [630/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:28.705 [631/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:28.705 [632/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:28.705 [633/707] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:28.967 [634/707] Compiling C object 
app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:03:28.967 [635/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:28.967 [636/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:28.967 [637/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:29.241 [638/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:29.241 [639/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:29.241 [640/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:29.241 [641/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:29.241 [642/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:29.507 [643/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:29.508 [644/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:29.768 [645/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:29.768 [646/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:29.768 [647/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:30.028 [648/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:30.028 [649/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:30.028 [650/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:30.028 [651/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:30.288 [652/707] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:30.288 [653/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:30.288 [654/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:30.548 [655/707] Compiling C object 
app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:30.548 [656/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:30.807 [657/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:30.807 [658/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:30.807 [659/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:31.067 [660/707] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:31.327 [661/707] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:31.327 [662/707] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:31.327 [663/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:31.327 [664/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:31.327 [665/707] Linking static target lib/librte_pipeline.a 00:03:31.586 [666/707] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:31.586 [667/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:31.844 [668/707] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:31.844 [669/707] Linking target app/dpdk-dumpcap 00:03:31.844 [670/707] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:31.844 [671/707] Linking target app/dpdk-graph 00:03:32.105 [672/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:32.105 [673/707] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:32.105 [674/707] Linking target app/dpdk-pdump 00:03:32.368 [675/707] Linking target app/dpdk-test-acl 00:03:32.368 [676/707] Linking target app/dpdk-proc-info 00:03:32.368 [677/707] Linking target app/dpdk-test-bbdev 00:03:32.634 [678/707] Linking target app/dpdk-test-cmdline 00:03:32.634 [679/707] Linking target app/dpdk-test-crypto-perf 00:03:32.634 [680/707] Linking target app/dpdk-test-compress-perf 00:03:32.900 [681/707] Linking target app/dpdk-test-dma-perf 
00:03:32.900 [682/707] Linking target app/dpdk-test-fib 00:03:32.900 [683/707] Linking target app/dpdk-test-eventdev 00:03:32.900 [684/707] Linking target app/dpdk-test-flow-perf 00:03:33.168 [685/707] Linking target app/dpdk-test-gpudev 00:03:33.168 [686/707] Linking target app/dpdk-test-mldev 00:03:33.168 [687/707] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:33.168 [688/707] Linking target app/dpdk-test-pipeline 00:03:33.168 [689/707] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:33.431 [690/707] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:33.431 [691/707] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:33.690 [692/707] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:33.690 [693/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:33.690 [694/707] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:33.949 [695/707] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:33.949 [696/707] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:33.949 [697/707] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:34.209 [698/707] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:34.209 [699/707] Linking target app/dpdk-test-sad 00:03:34.469 [700/707] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:34.469 [701/707] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.469 [702/707] Linking target app/dpdk-test-regex 00:03:34.469 [703/707] Linking target lib/librte_pipeline.so.24.0 00:03:34.730 [704/707] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:34.730 [705/707] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:34.990 [706/707] Linking target app/dpdk-test-security-perf 00:03:34.990 [707/707] Linking target 
app/dpdk-testpmd 00:03:34.990 20:16:28 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:03:34.990 20:16:28 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:34.990 20:16:28 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:35.250 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:35.250 [0/1] Installing files. 00:03:35.514 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:35.514 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:35.514 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:35.514 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:35.514 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:35.514 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:35.514 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:35.514 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:35.514 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:35.514 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:35.514 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:35.514 
Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:35.514 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:35.514 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:35.514 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:35.514 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:35.514 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:35.514 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:35.514 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:35.514 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:35.514 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:35.514 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:35.514 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:35.514 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:35.514 Installing 
/home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:35.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:35.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:35.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:35.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:35.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:35.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:35.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:35.514 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:35.514 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:35.514 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:35.514 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:35.514 Installing 
/home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:35.514 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:35.514 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.514 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.514 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.514 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.514 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.514 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.514 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.514 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.514 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.514 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.514 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.514 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.514 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.514 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.514 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.514 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.514 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:35.514 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:35.514 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:35.514 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:35.514 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c 
to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.515 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.515 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.516 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:35.516 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:35.516 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h 
to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 
00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:35.516 Installing 
/home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:35.516 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:35.517 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.517 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.517 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:35.517 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:35.518 Installing 
/home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:35.518 Installing 
/home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:35.518 
Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.518 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:35.518 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:35.518 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.518 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.518 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.518 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.518 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 
Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_bbdev.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_gpudev.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:35.519 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_pdump.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.519 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.783 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.783 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.783 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.783 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:35.783 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.783 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:35.783 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.783 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:35.783 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.783 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:35.783 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:35.783 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:35.783 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:35.783 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 
00:03:35.783 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:35.783 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:35.783 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:35.783 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:35.783 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:35.783 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:35.783 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:35.783 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:35.783 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:35.783 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:35.783 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:35.783 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:35.783 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:35.783 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:35.783 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:35.783 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:35.783 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.783 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.783 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.783 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to 
/home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:35.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:35.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:35.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:35.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:35.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:35.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:35.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:35.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:35.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:35.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:35.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:35.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.783 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.783 
Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:35.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.784 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing 
/home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing 
/home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing 
/home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing 
/home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 
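The long run of "Installing ... to .../build/include" lines above is meson's install step copying DPDK's public headers into the build prefix. A minimal, self-contained sketch of that flat header install (the directories and header names below are fabricated for illustration, not taken from the log):

```shell
# Sketch: copy every public rte_*.h from a nested source tree into a flat
# include dir, mirroring meson's "Installing <src>.h to <prefix>/include" step.
set -eu

src=$(mktemp -d)   # stand-in for the DPDK source tree
dst=$(mktemp -d)   # stand-in for <prefix>/include

# Fabricate a couple of "public" headers in nested lib/ subdirectories.
mkdir -p "$src/lib/lpm" "$src/lib/fib"
echo '#define RTE_LPM_H' > "$src/lib/lpm/rte_lpm.h"
echo '#define RTE_FIB_H' > "$src/lib/fib/rte_fib.h"

# The install is flat: the lib/<component>/ structure is dropped on copy.
find "$src/lib" -name 'rte_*.h' | while read -r h; do
    echo "Installing $h to $dst"
    cp "$h" "$dst/"
done
```

Note the flattening: headers from `lib/lpm/` and `lib/fib/` all land side by side in one include directory, which is why every log line above ends in the same `/build/include` path.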
00:03:35.785 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.786 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.786 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.786 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.786 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.786 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.786 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.786 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.786 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.786 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.786 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.786 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.786 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.786 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.786 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.786 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:35.786 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.786 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.786 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.786 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.786 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.786 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.786 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.786 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.786 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.786 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.786 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.786 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.786 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.786 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.786 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.786 
Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.786 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.786 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.786 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.786 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.786 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.786 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.786 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.786 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:35.786 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:35.786 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:35.786 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:35.786 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:35.786 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:35.786 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.786 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to 
/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:35.786 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:35.786 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:03:35.786 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:35.786 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:03:35.786 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:35.786 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:03:35.786 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:35.786 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:03:35.786 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:35.786 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:03:35.786 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:35.786 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:03:35.786 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:35.786 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:03:35.786 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:35.786 Installing symlink pointing to librte_mbuf.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:03:35.786 Installing symlink pointing to librte_mbuf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:35.786 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:03:35.786 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:35.786 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:03:35.786 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:35.786 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:03:35.786 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:35.786 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:03:35.786 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:35.786 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:03:35.786 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:35.786 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:03:35.786 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:35.786 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:03:35.786 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:35.786 Installing symlink pointing to librte_timer.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:03:35.786 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:35.786 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:03:35.786 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:35.786 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:03:35.786 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:35.786 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:03:35.786 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:35.786 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:03:35.786 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:35.786 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:03:35.786 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:35.786 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:03:35.786 Installing symlink pointing to librte_compressdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:35.786 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:03:35.786 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:35.786 Installing symlink 
pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:03:35.786 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:35.786 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:03:35.786 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:35.786 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:03:35.786 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:35.786 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:03:35.786 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:35.786 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:03:35.786 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:35.786 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:03:35.786 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:35.787 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:03:35.787 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:35.787 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:03:35.787 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:35.787 
Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:03:35.787 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:35.787 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:03:35.787 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:35.787 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:03:35.787 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:35.787 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:03:35.787 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:35.787 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:03:35.787 Installing symlink pointing to librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:35.787 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:03:35.787 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:35.787 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:03:35.787 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:35.787 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:03:35.787 Installing symlink pointing to librte_rawdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:35.787 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:03:35.787 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:35.787 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:03:35.787 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:35.787 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:03:35.787 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:35.787 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:03:35.787 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:35.787 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:03:35.787 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:35.787 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:03:35.787 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:36.047 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:36.047 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:36.047 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:36.047 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:36.047 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:36.047 './librte_bus_vdev.so.24.0' -> 
'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:36.047 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:36.047 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:36.047 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:36.047 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:36.047 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:36.047 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:36.047 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:03:36.047 Installing symlink pointing to librte_stack.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:36.047 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:03:36.047 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:36.047 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:03:36.047 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:36.047 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:03:36.047 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:36.047 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:03:36.047 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:36.047 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:03:36.047 Installing symlink pointing to librte_port.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:36.047 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:03:36.047 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:36.047 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:03:36.047 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:36.047 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:03:36.047 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:36.047 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:03:36.047 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:36.047 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:03:36.047 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:36.047 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:36.047 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:36.047 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:36.047 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:36.047 Installing symlink pointing to librte_mempool_ring.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:36.047 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:36.047 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:03:36.047 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:36.047 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:03:36.047 20:16:29 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:03:36.047 ************************************ 00:03:36.047 END TEST build_native_dpdk 00:03:36.047 ************************************ 00:03:36.047 20:16:29 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:36.047 00:03:36.047 real 0m48.230s 00:03:36.047 user 5m31.427s 00:03:36.047 sys 0m55.682s 00:03:36.047 20:16:29 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:36.047 20:16:29 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:36.047 20:16:29 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:36.047 20:16:29 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:36.047 20:16:29 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:36.047 20:16:29 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:36.047 20:16:29 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:36.047 20:16:29 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:36.047 20:16:29 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:36.047 20:16:29 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan 
--enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:03:36.047 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:36.308 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.308 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:36.308 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:36.876 Using 'verbs' RDMA provider 00:03:52.715 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:04:07.610 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:04:08.179 Creating mk/config.mk...done. 00:04:08.179 Creating mk/cc.flags.mk...done. 00:04:08.439 Type 'make' to build. 00:04:08.439 20:17:01 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:04:08.439 20:17:01 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:04:08.439 20:17:01 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:04:08.439 20:17:01 -- common/autotest_common.sh@10 -- $ set +x 00:04:08.439 ************************************ 00:04:08.439 START TEST make 00:04:08.439 ************************************ 00:04:08.439 20:17:01 make -- common/autotest_common.sh@1125 -- $ make -j10 00:04:08.697 make[1]: Nothing to be done for 'all'. 
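The "Installing symlink pointing to librte_X.so.24.0 ..." and `'./librte_bus_pci.so' -> 'dpdk/pmds-24.0/...'` lines earlier in the install reflect the standard three-level shared-library naming: a fully versioned real file (`.so.24.0`), a soname link (`.so.24`) for the dynamic linker, and an unversioned dev link (`.so`) for `-l` at link time. A self-contained sketch of building that chain (the library name and version numbers are illustrative, not taken from the log):

```shell
# Sketch: create the .so -> .so.24 -> .so.24.0 symlink chain that the
# "Installing symlink pointing to ..." log lines describe.
set -eu

libdir=$(mktemp -d)                        # stand-in for <prefix>/lib
real="librte_example.so.24.0"              # hypothetical fully versioned file
touch "$libdir/$real"

# soname link: what the dynamic linker resolves at run time
ln -sf "$real" "$libdir/librte_example.so.24"
# dev link: what "-lrte_example" resolves at link time
ln -sf "librte_example.so.24" "$libdir/librte_example.so"

readlink "$libdir/librte_example.so.24"    # -> librte_example.so.24.0
readlink "$libdir/librte_example.so"       # -> librte_example.so.24
```

Installing the two links separately is what lets a minor update replace only the real file while every consumer keeps resolving through the same soname.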
00:05:04.983 CC lib/log/log.o 00:05:04.983 CC lib/ut/ut.o 00:05:04.983 CC lib/log/log_flags.o 00:05:04.983 CC lib/log/log_deprecated.o 00:05:04.983 CC lib/ut_mock/mock.o 00:05:04.983 LIB libspdk_log.a 00:05:04.983 LIB libspdk_ut.a 00:05:04.983 LIB libspdk_ut_mock.a 00:05:04.983 SO libspdk_log.so.7.0 00:05:04.983 SO libspdk_ut.so.2.0 00:05:04.983 SO libspdk_ut_mock.so.6.0 00:05:04.983 SYMLINK libspdk_ut.so 00:05:04.983 SYMLINK libspdk_log.so 00:05:04.983 SYMLINK libspdk_ut_mock.so 00:05:04.983 CC lib/util/base64.o 00:05:04.983 CC lib/dma/dma.o 00:05:04.983 CC lib/util/bit_array.o 00:05:04.983 CC lib/util/cpuset.o 00:05:04.983 CC lib/util/crc16.o 00:05:04.983 CXX lib/trace_parser/trace.o 00:05:04.983 CC lib/util/crc32.o 00:05:04.983 CC lib/util/crc32c.o 00:05:04.983 CC lib/ioat/ioat.o 00:05:04.983 CC lib/vfio_user/host/vfio_user_pci.o 00:05:04.983 CC lib/vfio_user/host/vfio_user.o 00:05:04.983 CC lib/util/crc32_ieee.o 00:05:04.983 CC lib/util/crc64.o 00:05:04.983 CC lib/util/dif.o 00:05:04.983 LIB libspdk_dma.a 00:05:04.983 CC lib/util/fd.o 00:05:04.983 CC lib/util/fd_group.o 00:05:04.983 SO libspdk_dma.so.5.0 00:05:04.983 CC lib/util/file.o 00:05:04.983 CC lib/util/hexlify.o 00:05:04.983 CC lib/util/iov.o 00:05:04.983 SYMLINK libspdk_dma.so 00:05:04.983 CC lib/util/math.o 00:05:04.983 CC lib/util/net.o 00:05:04.983 LIB libspdk_ioat.a 00:05:04.983 SO libspdk_ioat.so.7.0 00:05:04.983 LIB libspdk_vfio_user.a 00:05:04.983 SYMLINK libspdk_ioat.so 00:05:04.983 CC lib/util/pipe.o 00:05:04.983 SO libspdk_vfio_user.so.5.0 00:05:04.983 CC lib/util/strerror_tls.o 00:05:04.983 CC lib/util/string.o 00:05:04.983 CC lib/util/uuid.o 00:05:04.983 CC lib/util/xor.o 00:05:04.983 SYMLINK libspdk_vfio_user.so 00:05:04.983 CC lib/util/zipf.o 00:05:04.983 CC lib/util/md5.o 00:05:04.983 LIB libspdk_util.a 00:05:04.983 SO libspdk_util.so.10.0 00:05:04.983 SYMLINK libspdk_util.so 00:05:04.983 LIB libspdk_trace_parser.a 00:05:04.983 SO libspdk_trace_parser.so.6.0 00:05:04.983 CC 
lib/json/json_parse.o 00:05:04.983 CC lib/json/json_util.o 00:05:04.983 CC lib/json/json_write.o 00:05:04.983 CC lib/rdma_provider/common.o 00:05:04.983 CC lib/rdma_utils/rdma_utils.o 00:05:04.983 CC lib/vmd/vmd.o 00:05:04.983 CC lib/conf/conf.o 00:05:04.983 CC lib/idxd/idxd.o 00:05:04.983 CC lib/env_dpdk/env.o 00:05:04.983 SYMLINK libspdk_trace_parser.so 00:05:04.983 CC lib/env_dpdk/memory.o 00:05:04.983 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:04.984 CC lib/idxd/idxd_user.o 00:05:04.984 LIB libspdk_conf.a 00:05:04.984 CC lib/vmd/led.o 00:05:04.984 SO libspdk_conf.so.6.0 00:05:04.984 LIB libspdk_json.a 00:05:04.984 SO libspdk_json.so.6.0 00:05:04.984 LIB libspdk_rdma_utils.a 00:05:04.984 SYMLINK libspdk_conf.so 00:05:04.984 SO libspdk_rdma_utils.so.1.0 00:05:04.984 CC lib/env_dpdk/pci.o 00:05:04.984 CC lib/env_dpdk/init.o 00:05:04.984 SYMLINK libspdk_json.so 00:05:04.984 CC lib/idxd/idxd_kernel.o 00:05:04.984 LIB libspdk_rdma_provider.a 00:05:04.984 SYMLINK libspdk_rdma_utils.so 00:05:04.984 CC lib/env_dpdk/threads.o 00:05:04.984 SO libspdk_rdma_provider.so.6.0 00:05:04.984 SYMLINK libspdk_rdma_provider.so 00:05:04.984 LIB libspdk_vmd.a 00:05:04.984 CC lib/env_dpdk/pci_ioat.o 00:05:04.984 CC lib/env_dpdk/pci_virtio.o 00:05:04.984 SO libspdk_vmd.so.6.0 00:05:04.984 CC lib/env_dpdk/pci_vmd.o 00:05:04.984 CC lib/env_dpdk/pci_idxd.o 00:05:04.984 CC lib/jsonrpc/jsonrpc_server.o 00:05:04.984 SYMLINK libspdk_vmd.so 00:05:04.984 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:04.984 CC lib/env_dpdk/pci_event.o 00:05:04.984 CC lib/env_dpdk/sigbus_handler.o 00:05:04.984 CC lib/env_dpdk/pci_dpdk.o 00:05:04.984 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:04.984 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:04.984 CC lib/jsonrpc/jsonrpc_client.o 00:05:04.984 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:04.984 LIB libspdk_idxd.a 00:05:04.984 SO libspdk_idxd.so.12.1 00:05:04.984 LIB libspdk_jsonrpc.a 00:05:04.984 SYMLINK libspdk_idxd.so 00:05:04.984 SO libspdk_jsonrpc.so.6.0 00:05:04.984 
SYMLINK libspdk_jsonrpc.so 00:05:04.984 CC lib/rpc/rpc.o 00:05:04.984 LIB libspdk_env_dpdk.a 00:05:04.984 LIB libspdk_rpc.a 00:05:04.984 SO libspdk_rpc.so.6.0 00:05:04.984 SO libspdk_env_dpdk.so.15.0 00:05:04.984 SYMLINK libspdk_rpc.so 00:05:04.984 SYMLINK libspdk_env_dpdk.so 00:05:04.984 CC lib/notify/notify.o 00:05:04.984 CC lib/notify/notify_rpc.o 00:05:04.984 CC lib/keyring/keyring.o 00:05:04.984 CC lib/keyring/keyring_rpc.o 00:05:04.984 CC lib/trace/trace_flags.o 00:05:04.984 CC lib/trace/trace.o 00:05:04.984 CC lib/trace/trace_rpc.o 00:05:04.984 LIB libspdk_notify.a 00:05:04.984 SO libspdk_notify.so.6.0 00:05:04.984 LIB libspdk_keyring.a 00:05:04.984 SYMLINK libspdk_notify.so 00:05:04.984 SO libspdk_keyring.so.2.0 00:05:04.984 LIB libspdk_trace.a 00:05:04.984 SYMLINK libspdk_keyring.so 00:05:04.984 SO libspdk_trace.so.11.0 00:05:04.984 SYMLINK libspdk_trace.so 00:05:04.984 CC lib/thread/thread.o 00:05:04.984 CC lib/sock/sock_rpc.o 00:05:04.984 CC lib/sock/sock.o 00:05:04.984 CC lib/thread/iobuf.o 00:05:04.984 LIB libspdk_sock.a 00:05:04.984 SO libspdk_sock.so.10.0 00:05:04.984 SYMLINK libspdk_sock.so 00:05:04.984 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:04.984 CC lib/nvme/nvme_ns.o 00:05:04.984 CC lib/nvme/nvme_pcie.o 00:05:04.984 CC lib/nvme/nvme_qpair.o 00:05:04.984 CC lib/nvme/nvme_ns_cmd.o 00:05:04.984 CC lib/nvme/nvme_ctrlr.o 00:05:04.984 CC lib/nvme/nvme_fabric.o 00:05:04.984 CC lib/nvme/nvme_pcie_common.o 00:05:04.984 CC lib/nvme/nvme.o 00:05:04.984 CC lib/nvme/nvme_quirks.o 00:05:04.984 CC lib/nvme/nvme_transport.o 00:05:04.984 CC lib/nvme/nvme_discovery.o 00:05:04.984 LIB libspdk_thread.a 00:05:04.984 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:04.984 SO libspdk_thread.so.10.1 00:05:05.245 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:05.245 SYMLINK libspdk_thread.so 00:05:05.245 CC lib/nvme/nvme_tcp.o 00:05:05.502 CC lib/nvme/nvme_opal.o 00:05:05.502 CC lib/nvme/nvme_io_msg.o 00:05:05.502 CC lib/nvme/nvme_poll_group.o 00:05:05.760 CC lib/nvme/nvme_zns.o 00:05:06.018 
CC lib/accel/accel.o 00:05:06.018 CC lib/nvme/nvme_stubs.o 00:05:06.277 CC lib/accel/accel_rpc.o 00:05:06.277 CC lib/blob/blobstore.o 00:05:06.277 CC lib/accel/accel_sw.o 00:05:06.277 CC lib/nvme/nvme_auth.o 00:05:06.536 CC lib/nvme/nvme_cuse.o 00:05:06.536 CC lib/nvme/nvme_rdma.o 00:05:06.794 CC lib/blob/request.o 00:05:07.052 CC lib/init/json_config.o 00:05:07.310 CC lib/virtio/virtio.o 00:05:07.310 CC lib/fsdev/fsdev.o 00:05:07.310 CC lib/init/subsystem.o 00:05:07.569 CC lib/fsdev/fsdev_io.o 00:05:07.569 CC lib/fsdev/fsdev_rpc.o 00:05:07.569 CC lib/init/subsystem_rpc.o 00:05:07.569 CC lib/init/rpc.o 00:05:07.569 CC lib/virtio/virtio_vhost_user.o 00:05:07.829 CC lib/virtio/virtio_vfio_user.o 00:05:07.829 CC lib/virtio/virtio_pci.o 00:05:07.829 CC lib/blob/zeroes.o 00:05:07.829 LIB libspdk_init.a 00:05:07.829 CC lib/blob/blob_bs_dev.o 00:05:07.829 SO libspdk_init.so.6.0 00:05:08.087 SYMLINK libspdk_init.so 00:05:08.345 LIB libspdk_accel.a 00:05:08.345 LIB libspdk_fsdev.a 00:05:08.345 SO libspdk_accel.so.16.0 00:05:08.345 LIB libspdk_virtio.a 00:05:08.345 LIB libspdk_nvme.a 00:05:08.345 SO libspdk_fsdev.so.1.0 00:05:08.345 CC lib/event/reactor.o 00:05:08.345 CC lib/event/app.o 00:05:08.345 CC lib/event/log_rpc.o 00:05:08.345 CC lib/event/app_rpc.o 00:05:08.345 CC lib/event/scheduler_static.o 00:05:08.345 SO libspdk_virtio.so.7.0 00:05:08.345 SYMLINK libspdk_fsdev.so 00:05:08.345 SYMLINK libspdk_accel.so 00:05:08.345 SYMLINK libspdk_virtio.so 00:05:08.602 SO libspdk_nvme.so.14.0 00:05:08.602 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:08.602 CC lib/bdev/bdev_rpc.o 00:05:08.602 CC lib/bdev/bdev.o 00:05:08.602 CC lib/bdev/part.o 00:05:08.602 CC lib/bdev/bdev_zone.o 00:05:08.860 CC lib/bdev/scsi_nvme.o 00:05:09.118 SYMLINK libspdk_nvme.so 00:05:09.118 LIB libspdk_event.a 00:05:09.118 SO libspdk_event.so.14.0 00:05:09.118 SYMLINK libspdk_event.so 00:05:09.376 LIB libspdk_fuse_dispatcher.a 00:05:09.376 SO libspdk_fuse_dispatcher.so.1.0 00:05:09.636 SYMLINK 
libspdk_fuse_dispatcher.so 00:05:11.536 LIB libspdk_blob.a 00:05:11.536 SO libspdk_blob.so.11.0 00:05:11.536 SYMLINK libspdk_blob.so 00:05:11.795 CC lib/lvol/lvol.o 00:05:11.795 CC lib/blobfs/blobfs.o 00:05:11.795 CC lib/blobfs/tree.o 00:05:12.730 LIB libspdk_bdev.a 00:05:12.730 SO libspdk_bdev.so.16.0 00:05:12.730 LIB libspdk_blobfs.a 00:05:12.730 SYMLINK libspdk_bdev.so 00:05:12.989 SO libspdk_blobfs.so.10.0 00:05:12.989 LIB libspdk_lvol.a 00:05:12.989 SYMLINK libspdk_blobfs.so 00:05:12.989 SO libspdk_lvol.so.10.0 00:05:12.989 CC lib/ublk/ublk.o 00:05:12.989 CC lib/ublk/ublk_rpc.o 00:05:12.989 CC lib/nvmf/ctrlr.o 00:05:12.989 CC lib/nvmf/ctrlr_discovery.o 00:05:12.989 CC lib/nvmf/subsystem.o 00:05:12.989 CC lib/nvmf/ctrlr_bdev.o 00:05:12.989 CC lib/scsi/dev.o 00:05:12.989 CC lib/nbd/nbd.o 00:05:12.989 SYMLINK libspdk_lvol.so 00:05:12.989 CC lib/ftl/ftl_core.o 00:05:12.989 CC lib/nvmf/nvmf.o 00:05:13.248 CC lib/ftl/ftl_init.o 00:05:13.506 CC lib/scsi/lun.o 00:05:13.506 CC lib/nbd/nbd_rpc.o 00:05:13.506 CC lib/ftl/ftl_layout.o 00:05:13.765 LIB libspdk_nbd.a 00:05:13.765 CC lib/ftl/ftl_debug.o 00:05:13.765 SO libspdk_nbd.so.7.0 00:05:14.024 SYMLINK libspdk_nbd.so 00:05:14.024 CC lib/ftl/ftl_io.o 00:05:14.024 CC lib/scsi/port.o 00:05:14.024 CC lib/nvmf/nvmf_rpc.o 00:05:14.024 CC lib/ftl/ftl_sb.o 00:05:14.283 CC lib/ftl/ftl_l2p.o 00:05:14.283 CC lib/scsi/scsi.o 00:05:14.283 CC lib/ftl/ftl_l2p_flat.o 00:05:14.541 LIB libspdk_ublk.a 00:05:14.541 CC lib/ftl/ftl_nv_cache.o 00:05:14.541 CC lib/scsi/scsi_bdev.o 00:05:14.541 CC lib/ftl/ftl_band.o 00:05:14.541 SO libspdk_ublk.so.3.0 00:05:14.541 CC lib/nvmf/transport.o 00:05:14.541 SYMLINK libspdk_ublk.so 00:05:14.541 CC lib/ftl/ftl_band_ops.o 00:05:14.541 CC lib/ftl/ftl_writer.o 00:05:14.799 CC lib/nvmf/tcp.o 00:05:15.058 CC lib/nvmf/stubs.o 00:05:15.058 CC lib/ftl/ftl_rq.o 00:05:15.316 CC lib/scsi/scsi_pr.o 00:05:15.316 CC lib/nvmf/mdns_server.o 00:05:15.316 CC lib/nvmf/rdma.o 00:05:15.316 CC lib/nvmf/auth.o 00:05:15.575 CC 
lib/ftl/ftl_reloc.o 00:05:15.575 CC lib/ftl/ftl_l2p_cache.o 00:05:15.575 CC lib/scsi/scsi_rpc.o 00:05:15.833 CC lib/scsi/task.o 00:05:15.833 CC lib/ftl/ftl_p2l.o 00:05:15.833 CC lib/ftl/ftl_p2l_log.o 00:05:16.092 CC lib/ftl/mngt/ftl_mngt.o 00:05:16.092 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:16.092 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:16.349 LIB libspdk_scsi.a 00:05:16.349 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:16.349 SO libspdk_scsi.so.9.0 00:05:16.349 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:16.349 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:16.349 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:16.349 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:16.349 SYMLINK libspdk_scsi.so 00:05:16.349 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:16.607 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:16.607 CC lib/iscsi/conn.o 00:05:16.607 CC lib/iscsi/init_grp.o 00:05:16.607 CC lib/iscsi/iscsi.o 00:05:16.607 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:16.865 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:16.865 CC lib/vhost/vhost.o 00:05:16.865 CC lib/vhost/vhost_rpc.o 00:05:16.865 CC lib/vhost/vhost_scsi.o 00:05:16.865 CC lib/iscsi/param.o 00:05:16.865 CC lib/iscsi/portal_grp.o 00:05:17.123 CC lib/vhost/vhost_blk.o 00:05:17.123 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:17.400 CC lib/vhost/rte_vhost_user.o 00:05:17.400 CC lib/ftl/utils/ftl_conf.o 00:05:17.400 CC lib/ftl/utils/ftl_md.o 00:05:17.400 CC lib/ftl/utils/ftl_mempool.o 00:05:17.707 CC lib/ftl/utils/ftl_bitmap.o 00:05:17.707 CC lib/ftl/utils/ftl_property.o 00:05:17.707 CC lib/iscsi/tgt_node.o 00:05:17.707 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:17.707 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:17.965 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:17.965 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:17.965 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:17.965 CC lib/iscsi/iscsi_subsystem.o 00:05:18.223 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:18.223 LIB libspdk_nvmf.a 00:05:18.223 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:18.223 CC lib/ftl/upgrade/ftl_sb_v3.o 
00:05:18.223 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:18.223 SO libspdk_nvmf.so.19.0 00:05:18.483 CC lib/iscsi/iscsi_rpc.o 00:05:18.483 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:18.483 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:18.483 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:18.483 CC lib/iscsi/task.o 00:05:18.483 LIB libspdk_vhost.a 00:05:18.483 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:18.743 SYMLINK libspdk_nvmf.so 00:05:18.743 CC lib/ftl/base/ftl_base_dev.o 00:05:18.743 CC lib/ftl/base/ftl_base_bdev.o 00:05:18.743 CC lib/ftl/ftl_trace.o 00:05:18.743 SO libspdk_vhost.so.8.0 00:05:18.743 SYMLINK libspdk_vhost.so 00:05:19.003 LIB libspdk_iscsi.a 00:05:19.003 LIB libspdk_ftl.a 00:05:19.003 SO libspdk_iscsi.so.8.0 00:05:19.263 SYMLINK libspdk_iscsi.so 00:05:19.263 SO libspdk_ftl.so.9.0 00:05:19.522 SYMLINK libspdk_ftl.so 00:05:20.090 CC module/env_dpdk/env_dpdk_rpc.o 00:05:20.090 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:20.090 CC module/keyring/file/keyring.o 00:05:20.090 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:20.090 CC module/accel/ioat/accel_ioat.o 00:05:20.090 CC module/fsdev/aio/fsdev_aio.o 00:05:20.090 CC module/sock/posix/posix.o 00:05:20.090 CC module/scheduler/gscheduler/gscheduler.o 00:05:20.090 CC module/accel/error/accel_error.o 00:05:20.090 CC module/blob/bdev/blob_bdev.o 00:05:20.090 LIB libspdk_env_dpdk_rpc.a 00:05:20.090 SO libspdk_env_dpdk_rpc.so.6.0 00:05:20.090 SYMLINK libspdk_env_dpdk_rpc.so 00:05:20.090 CC module/accel/error/accel_error_rpc.o 00:05:20.090 CC module/keyring/file/keyring_rpc.o 00:05:20.090 LIB libspdk_scheduler_dpdk_governor.a 00:05:20.090 LIB libspdk_scheduler_gscheduler.a 00:05:20.090 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:20.090 SO libspdk_scheduler_gscheduler.so.4.0 00:05:20.349 LIB libspdk_scheduler_dynamic.a 00:05:20.349 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:20.349 CC module/accel/ioat/accel_ioat_rpc.o 00:05:20.349 SO libspdk_scheduler_dynamic.so.4.0 00:05:20.349 SYMLINK 
libspdk_scheduler_gscheduler.so 00:05:20.349 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:20.349 LIB libspdk_accel_error.a 00:05:20.349 LIB libspdk_keyring_file.a 00:05:20.349 SYMLINK libspdk_scheduler_dynamic.so 00:05:20.349 SO libspdk_accel_error.so.2.0 00:05:20.349 SO libspdk_keyring_file.so.2.0 00:05:20.349 LIB libspdk_blob_bdev.a 00:05:20.349 CC module/fsdev/aio/linux_aio_mgr.o 00:05:20.349 LIB libspdk_accel_ioat.a 00:05:20.349 SYMLINK libspdk_accel_error.so 00:05:20.349 SO libspdk_blob_bdev.so.11.0 00:05:20.349 SYMLINK libspdk_keyring_file.so 00:05:20.349 SO libspdk_accel_ioat.so.6.0 00:05:20.349 CC module/accel/dsa/accel_dsa.o 00:05:20.349 CC module/accel/dsa/accel_dsa_rpc.o 00:05:20.349 CC module/accel/iaa/accel_iaa.o 00:05:20.349 CC module/accel/iaa/accel_iaa_rpc.o 00:05:20.608 SYMLINK libspdk_blob_bdev.so 00:05:20.608 SYMLINK libspdk_accel_ioat.so 00:05:20.608 CC module/keyring/linux/keyring.o 00:05:20.608 CC module/keyring/linux/keyring_rpc.o 00:05:20.608 LIB libspdk_accel_iaa.a 00:05:20.868 SO libspdk_accel_iaa.so.3.0 00:05:20.868 CC module/bdev/delay/vbdev_delay.o 00:05:20.868 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:20.868 CC module/bdev/gpt/gpt.o 00:05:20.868 LIB libspdk_keyring_linux.a 00:05:20.868 CC module/blobfs/bdev/blobfs_bdev.o 00:05:20.868 CC module/bdev/error/vbdev_error.o 00:05:20.868 LIB libspdk_accel_dsa.a 00:05:20.868 SO libspdk_keyring_linux.so.1.0 00:05:20.868 SYMLINK libspdk_accel_iaa.so 00:05:20.868 LIB libspdk_fsdev_aio.a 00:05:20.868 SO libspdk_accel_dsa.so.5.0 00:05:20.868 CC module/bdev/gpt/vbdev_gpt.o 00:05:20.868 SO libspdk_fsdev_aio.so.1.0 00:05:20.868 SYMLINK libspdk_keyring_linux.so 00:05:20.868 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:20.868 SYMLINK libspdk_accel_dsa.so 00:05:20.868 CC module/bdev/error/vbdev_error_rpc.o 00:05:20.868 SYMLINK libspdk_fsdev_aio.so 00:05:20.868 LIB libspdk_sock_posix.a 00:05:21.129 SO libspdk_sock_posix.so.6.0 00:05:21.129 LIB libspdk_blobfs_bdev.a 00:05:21.129 LIB libspdk_bdev_error.a 
00:05:21.129 SYMLINK libspdk_sock_posix.so 00:05:21.129 CC module/bdev/lvol/vbdev_lvol.o 00:05:21.129 SO libspdk_blobfs_bdev.so.6.0 00:05:21.129 SO libspdk_bdev_error.so.6.0 00:05:21.129 CC module/bdev/malloc/bdev_malloc.o 00:05:21.129 CC module/bdev/nvme/bdev_nvme.o 00:05:21.129 CC module/bdev/null/bdev_null.o 00:05:21.129 LIB libspdk_bdev_delay.a 00:05:21.129 SYMLINK libspdk_blobfs_bdev.so 00:05:21.129 SYMLINK libspdk_bdev_error.so 00:05:21.129 SO libspdk_bdev_delay.so.6.0 00:05:21.129 LIB libspdk_bdev_gpt.a 00:05:21.388 CC module/bdev/passthru/vbdev_passthru.o 00:05:21.388 SO libspdk_bdev_gpt.so.6.0 00:05:21.388 CC module/bdev/raid/bdev_raid.o 00:05:21.388 SYMLINK libspdk_bdev_delay.so 00:05:21.388 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:21.388 SYMLINK libspdk_bdev_gpt.so 00:05:21.388 CC module/bdev/nvme/nvme_rpc.o 00:05:21.388 CC module/bdev/split/vbdev_split.o 00:05:21.388 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:21.388 CC module/bdev/null/bdev_null_rpc.o 00:05:21.648 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:21.648 CC module/bdev/split/vbdev_split_rpc.o 00:05:21.648 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:21.648 CC module/bdev/nvme/bdev_mdns_client.o 00:05:21.648 LIB libspdk_bdev_null.a 00:05:21.648 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:21.648 SO libspdk_bdev_null.so.6.0 00:05:21.648 LIB libspdk_bdev_malloc.a 00:05:21.907 SO libspdk_bdev_malloc.so.6.0 00:05:21.907 LIB libspdk_bdev_split.a 00:05:21.907 SYMLINK libspdk_bdev_null.so 00:05:21.907 LIB libspdk_bdev_passthru.a 00:05:21.907 CC module/bdev/nvme/vbdev_opal.o 00:05:21.907 SO libspdk_bdev_split.so.6.0 00:05:21.907 SO libspdk_bdev_passthru.so.6.0 00:05:21.907 SYMLINK libspdk_bdev_malloc.so 00:05:21.907 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:21.907 SYMLINK libspdk_bdev_split.so 00:05:21.907 SYMLINK libspdk_bdev_passthru.so 00:05:22.166 CC module/bdev/ftl/bdev_ftl.o 00:05:22.166 CC module/bdev/aio/bdev_aio.o 00:05:22.166 CC module/bdev/iscsi/bdev_iscsi.o 
00:05:22.166 LIB libspdk_bdev_zone_block.a 00:05:22.166 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:22.166 SO libspdk_bdev_zone_block.so.6.0 00:05:22.166 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:22.166 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:22.166 LIB libspdk_bdev_lvol.a 00:05:22.166 SYMLINK libspdk_bdev_zone_block.so 00:05:22.166 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:22.427 SO libspdk_bdev_lvol.so.6.0 00:05:22.427 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:22.427 SYMLINK libspdk_bdev_lvol.so 00:05:22.428 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:22.428 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:22.428 LIB libspdk_bdev_ftl.a 00:05:22.428 CC module/bdev/aio/bdev_aio_rpc.o 00:05:22.428 SO libspdk_bdev_ftl.so.6.0 00:05:22.428 CC module/bdev/raid/bdev_raid_rpc.o 00:05:22.686 SYMLINK libspdk_bdev_ftl.so 00:05:22.686 CC module/bdev/raid/bdev_raid_sb.o 00:05:22.686 LIB libspdk_bdev_iscsi.a 00:05:22.686 CC module/bdev/raid/raid0.o 00:05:22.686 CC module/bdev/raid/raid1.o 00:05:22.686 SO libspdk_bdev_iscsi.so.6.0 00:05:22.686 CC module/bdev/raid/concat.o 00:05:22.686 LIB libspdk_bdev_aio.a 00:05:22.686 SO libspdk_bdev_aio.so.6.0 00:05:22.686 SYMLINK libspdk_bdev_iscsi.so 00:05:22.686 CC module/bdev/raid/raid5f.o 00:05:22.686 SYMLINK libspdk_bdev_aio.so 00:05:22.686 LIB libspdk_bdev_virtio.a 00:05:22.942 SO libspdk_bdev_virtio.so.6.0 00:05:22.942 SYMLINK libspdk_bdev_virtio.so 00:05:23.200 LIB libspdk_bdev_raid.a 00:05:23.458 SO libspdk_bdev_raid.so.6.0 00:05:23.458 SYMLINK libspdk_bdev_raid.so 00:05:24.026 LIB libspdk_bdev_nvme.a 00:05:24.026 SO libspdk_bdev_nvme.so.7.0 00:05:24.286 SYMLINK libspdk_bdev_nvme.so 00:05:24.854 CC module/event/subsystems/sock/sock.o 00:05:24.854 CC module/event/subsystems/iobuf/iobuf.o 00:05:24.854 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:24.854 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:24.854 CC module/event/subsystems/keyring/keyring.o 00:05:24.854 CC module/event/subsystems/scheduler/scheduler.o 
00:05:24.854 CC module/event/subsystems/vmd/vmd.o 00:05:24.854 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:24.854 CC module/event/subsystems/fsdev/fsdev.o 00:05:24.854 LIB libspdk_event_sock.a 00:05:24.854 LIB libspdk_event_keyring.a 00:05:25.112 LIB libspdk_event_scheduler.a 00:05:25.112 SO libspdk_event_keyring.so.1.0 00:05:25.112 SO libspdk_event_sock.so.5.0 00:05:25.112 LIB libspdk_event_vhost_blk.a 00:05:25.112 LIB libspdk_event_fsdev.a 00:05:25.112 LIB libspdk_event_iobuf.a 00:05:25.112 SO libspdk_event_scheduler.so.4.0 00:05:25.112 SO libspdk_event_vhost_blk.so.3.0 00:05:25.112 LIB libspdk_event_vmd.a 00:05:25.112 SO libspdk_event_iobuf.so.3.0 00:05:25.112 SYMLINK libspdk_event_keyring.so 00:05:25.112 SYMLINK libspdk_event_sock.so 00:05:25.112 SO libspdk_event_fsdev.so.1.0 00:05:25.112 SYMLINK libspdk_event_scheduler.so 00:05:25.112 SO libspdk_event_vmd.so.6.0 00:05:25.112 SYMLINK libspdk_event_fsdev.so 00:05:25.112 SYMLINK libspdk_event_iobuf.so 00:05:25.112 SYMLINK libspdk_event_vhost_blk.so 00:05:25.112 SYMLINK libspdk_event_vmd.so 00:05:25.371 CC module/event/subsystems/accel/accel.o 00:05:25.630 LIB libspdk_event_accel.a 00:05:25.630 SO libspdk_event_accel.so.6.0 00:05:25.887 SYMLINK libspdk_event_accel.so 00:05:26.146 CC module/event/subsystems/bdev/bdev.o 00:05:26.405 LIB libspdk_event_bdev.a 00:05:26.405 SO libspdk_event_bdev.so.6.0 00:05:26.405 SYMLINK libspdk_event_bdev.so 00:05:26.973 CC module/event/subsystems/ublk/ublk.o 00:05:26.973 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:26.973 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:26.973 CC module/event/subsystems/nbd/nbd.o 00:05:26.973 CC module/event/subsystems/scsi/scsi.o 00:05:26.973 LIB libspdk_event_ublk.a 00:05:26.973 SO libspdk_event_ublk.so.3.0 00:05:26.973 LIB libspdk_event_nbd.a 00:05:26.973 LIB libspdk_event_scsi.a 00:05:26.973 SYMLINK libspdk_event_ublk.so 00:05:26.973 SO libspdk_event_scsi.so.6.0 00:05:26.973 SO libspdk_event_nbd.so.6.0 00:05:27.231 LIB 
libspdk_event_nvmf.a 00:05:27.231 SYMLINK libspdk_event_scsi.so 00:05:27.231 SYMLINK libspdk_event_nbd.so 00:05:27.231 SO libspdk_event_nvmf.so.6.0 00:05:27.231 SYMLINK libspdk_event_nvmf.so 00:05:27.489 CC module/event/subsystems/iscsi/iscsi.o 00:05:27.489 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:27.489 LIB libspdk_event_iscsi.a 00:05:27.747 LIB libspdk_event_vhost_scsi.a 00:05:27.747 SO libspdk_event_iscsi.so.6.0 00:05:27.747 SO libspdk_event_vhost_scsi.so.3.0 00:05:27.747 SYMLINK libspdk_event_iscsi.so 00:05:27.747 SYMLINK libspdk_event_vhost_scsi.so 00:05:28.009 SO libspdk.so.6.0 00:05:28.009 SYMLINK libspdk.so 00:05:28.287 CC app/trace_record/trace_record.o 00:05:28.287 CXX app/trace/trace.o 00:05:28.287 CC app/spdk_lspci/spdk_lspci.o 00:05:28.287 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:28.287 CC app/nvmf_tgt/nvmf_main.o 00:05:28.287 CC app/iscsi_tgt/iscsi_tgt.o 00:05:28.287 CC app/spdk_tgt/spdk_tgt.o 00:05:28.287 CC examples/ioat/perf/perf.o 00:05:28.287 CC examples/util/zipf/zipf.o 00:05:28.287 CC test/thread/poller_perf/poller_perf.o 00:05:28.287 LINK spdk_lspci 00:05:28.546 LINK nvmf_tgt 00:05:28.546 LINK zipf 00:05:28.546 LINK interrupt_tgt 00:05:28.546 LINK spdk_tgt 00:05:28.546 LINK iscsi_tgt 00:05:28.546 LINK spdk_trace_record 00:05:28.546 LINK poller_perf 00:05:28.546 LINK ioat_perf 00:05:28.546 LINK spdk_trace 00:05:28.804 CC examples/ioat/verify/verify.o 00:05:28.804 CC app/spdk_nvme_perf/perf.o 00:05:28.804 CC app/spdk_nvme_identify/identify.o 00:05:28.804 CC examples/sock/hello_world/hello_sock.o 00:05:28.804 TEST_HEADER include/spdk/accel.h 00:05:28.804 TEST_HEADER include/spdk/accel_module.h 00:05:28.804 TEST_HEADER include/spdk/assert.h 00:05:28.804 TEST_HEADER include/spdk/barrier.h 00:05:28.804 TEST_HEADER include/spdk/base64.h 00:05:28.804 TEST_HEADER include/spdk/bdev.h 00:05:28.804 TEST_HEADER include/spdk/bdev_module.h 00:05:28.804 TEST_HEADER include/spdk/bdev_zone.h 00:05:28.804 TEST_HEADER 
include/spdk/bit_array.h 00:05:28.804 TEST_HEADER include/spdk/bit_pool.h 00:05:28.804 CC examples/thread/thread/thread_ex.o 00:05:28.804 TEST_HEADER include/spdk/blob_bdev.h 00:05:28.804 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:28.804 TEST_HEADER include/spdk/blobfs.h 00:05:28.804 TEST_HEADER include/spdk/blob.h 00:05:28.804 TEST_HEADER include/spdk/conf.h 00:05:28.804 CC examples/vmd/lsvmd/lsvmd.o 00:05:28.804 TEST_HEADER include/spdk/config.h 00:05:28.804 TEST_HEADER include/spdk/cpuset.h 00:05:28.804 TEST_HEADER include/spdk/crc16.h 00:05:28.804 TEST_HEADER include/spdk/crc32.h 00:05:28.804 TEST_HEADER include/spdk/crc64.h 00:05:28.804 TEST_HEADER include/spdk/dif.h 00:05:28.804 TEST_HEADER include/spdk/dma.h 00:05:28.804 TEST_HEADER include/spdk/endian.h 00:05:28.804 TEST_HEADER include/spdk/env_dpdk.h 00:05:29.064 TEST_HEADER include/spdk/env.h 00:05:29.064 TEST_HEADER include/spdk/event.h 00:05:29.064 TEST_HEADER include/spdk/fd_group.h 00:05:29.064 TEST_HEADER include/spdk/fd.h 00:05:29.064 TEST_HEADER include/spdk/file.h 00:05:29.064 TEST_HEADER include/spdk/fsdev.h 00:05:29.064 TEST_HEADER include/spdk/fsdev_module.h 00:05:29.064 LINK verify 00:05:29.064 TEST_HEADER include/spdk/ftl.h 00:05:29.064 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:29.064 TEST_HEADER include/spdk/gpt_spec.h 00:05:29.064 TEST_HEADER include/spdk/hexlify.h 00:05:29.064 TEST_HEADER include/spdk/histogram_data.h 00:05:29.064 TEST_HEADER include/spdk/idxd.h 00:05:29.064 TEST_HEADER include/spdk/idxd_spec.h 00:05:29.064 TEST_HEADER include/spdk/init.h 00:05:29.064 TEST_HEADER include/spdk/ioat.h 00:05:29.064 TEST_HEADER include/spdk/ioat_spec.h 00:05:29.064 TEST_HEADER include/spdk/iscsi_spec.h 00:05:29.064 TEST_HEADER include/spdk/json.h 00:05:29.064 CC test/dma/test_dma/test_dma.o 00:05:29.064 CC test/app/bdev_svc/bdev_svc.o 00:05:29.064 TEST_HEADER include/spdk/jsonrpc.h 00:05:29.064 TEST_HEADER include/spdk/keyring.h 00:05:29.064 TEST_HEADER include/spdk/keyring_module.h 
00:05:29.064 TEST_HEADER include/spdk/likely.h 00:05:29.064 TEST_HEADER include/spdk/log.h 00:05:29.064 TEST_HEADER include/spdk/lvol.h 00:05:29.064 TEST_HEADER include/spdk/md5.h 00:05:29.064 TEST_HEADER include/spdk/memory.h 00:05:29.064 TEST_HEADER include/spdk/mmio.h 00:05:29.064 TEST_HEADER include/spdk/nbd.h 00:05:29.064 TEST_HEADER include/spdk/net.h 00:05:29.064 TEST_HEADER include/spdk/notify.h 00:05:29.064 TEST_HEADER include/spdk/nvme.h 00:05:29.064 TEST_HEADER include/spdk/nvme_intel.h 00:05:29.064 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:29.064 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:29.064 TEST_HEADER include/spdk/nvme_spec.h 00:05:29.064 TEST_HEADER include/spdk/nvme_zns.h 00:05:29.064 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:29.064 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:29.064 TEST_HEADER include/spdk/nvmf.h 00:05:29.064 TEST_HEADER include/spdk/nvmf_spec.h 00:05:29.064 TEST_HEADER include/spdk/nvmf_transport.h 00:05:29.064 TEST_HEADER include/spdk/opal.h 00:05:29.064 TEST_HEADER include/spdk/opal_spec.h 00:05:29.064 TEST_HEADER include/spdk/pci_ids.h 00:05:29.064 TEST_HEADER include/spdk/pipe.h 00:05:29.064 TEST_HEADER include/spdk/queue.h 00:05:29.064 TEST_HEADER include/spdk/reduce.h 00:05:29.064 TEST_HEADER include/spdk/rpc.h 00:05:29.064 TEST_HEADER include/spdk/scheduler.h 00:05:29.064 TEST_HEADER include/spdk/scsi.h 00:05:29.064 TEST_HEADER include/spdk/scsi_spec.h 00:05:29.064 TEST_HEADER include/spdk/sock.h 00:05:29.064 TEST_HEADER include/spdk/stdinc.h 00:05:29.064 TEST_HEADER include/spdk/string.h 00:05:29.064 TEST_HEADER include/spdk/thread.h 00:05:29.064 TEST_HEADER include/spdk/trace.h 00:05:29.064 TEST_HEADER include/spdk/trace_parser.h 00:05:29.064 TEST_HEADER include/spdk/tree.h 00:05:29.064 TEST_HEADER include/spdk/ublk.h 00:05:29.064 TEST_HEADER include/spdk/util.h 00:05:29.064 TEST_HEADER include/spdk/uuid.h 00:05:29.064 TEST_HEADER include/spdk/version.h 00:05:29.064 TEST_HEADER include/spdk/vfio_user_pci.h 
00:05:29.064 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:29.064 LINK lsvmd 00:05:29.064 TEST_HEADER include/spdk/vhost.h 00:05:29.064 TEST_HEADER include/spdk/vmd.h 00:05:29.064 TEST_HEADER include/spdk/xor.h 00:05:29.064 TEST_HEADER include/spdk/zipf.h 00:05:29.064 CXX test/cpp_headers/accel.o 00:05:29.064 CC test/env/mem_callbacks/mem_callbacks.o 00:05:29.064 LINK bdev_svc 00:05:29.323 LINK hello_sock 00:05:29.323 LINK thread 00:05:29.323 CC test/env/vtophys/vtophys.o 00:05:29.323 CXX test/cpp_headers/accel_module.o 00:05:29.323 LINK vtophys 00:05:29.323 CC examples/vmd/led/led.o 00:05:29.582 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:29.582 CXX test/cpp_headers/assert.o 00:05:29.582 LINK test_dma 00:05:29.582 LINK led 00:05:29.582 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:29.582 CC examples/idxd/perf/perf.o 00:05:29.582 CC test/env/memory/memory_ut.o 00:05:29.582 LINK mem_callbacks 00:05:29.582 LINK env_dpdk_post_init 00:05:29.841 CXX test/cpp_headers/barrier.o 00:05:29.841 LINK spdk_nvme_perf 00:05:29.841 LINK spdk_nvme_identify 00:05:29.841 CXX test/cpp_headers/base64.o 00:05:29.841 CC test/app/histogram_perf/histogram_perf.o 00:05:29.841 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:30.100 CC examples/accel/perf/accel_perf.o 00:05:30.100 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:30.100 LINK idxd_perf 00:05:30.100 CXX test/cpp_headers/bdev.o 00:05:30.100 LINK histogram_perf 00:05:30.100 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:30.100 LINK nvme_fuzz 00:05:30.100 CC app/spdk_nvme_discover/discovery_aer.o 00:05:30.360 CXX test/cpp_headers/bdev_module.o 00:05:30.360 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:30.360 CC test/app/jsoncat/jsoncat.o 00:05:30.360 LINK hello_fsdev 00:05:30.360 LINK spdk_nvme_discover 00:05:30.360 CC examples/blob/hello_world/hello_blob.o 00:05:30.360 LINK jsoncat 00:05:30.360 CXX test/cpp_headers/bdev_zone.o 00:05:30.360 CC test/event/event_perf/event_perf.o 00:05:30.619 LINK accel_perf 
00:05:30.619 LINK event_perf 00:05:30.619 CXX test/cpp_headers/bit_array.o 00:05:30.619 CC test/event/reactor/reactor.o 00:05:30.619 LINK hello_blob 00:05:30.619 CC app/spdk_top/spdk_top.o 00:05:30.619 CC test/event/reactor_perf/reactor_perf.o 00:05:30.878 LINK vhost_fuzz 00:05:30.878 LINK reactor 00:05:30.878 CXX test/cpp_headers/bit_pool.o 00:05:30.878 LINK reactor_perf 00:05:30.878 CC test/event/app_repeat/app_repeat.o 00:05:30.878 LINK memory_ut 00:05:30.878 CC examples/nvme/hello_world/hello_world.o 00:05:31.137 CXX test/cpp_headers/blob_bdev.o 00:05:31.137 CC examples/blob/cli/blobcli.o 00:05:31.137 CC examples/nvme/reconnect/reconnect.o 00:05:31.137 LINK app_repeat 00:05:31.137 CC test/event/scheduler/scheduler.o 00:05:31.137 CC test/app/stub/stub.o 00:05:31.137 CXX test/cpp_headers/blobfs_bdev.o 00:05:31.137 LINK hello_world 00:05:31.137 CC test/env/pci/pci_ut.o 00:05:31.395 CXX test/cpp_headers/blobfs.o 00:05:31.395 LINK stub 00:05:31.395 LINK scheduler 00:05:31.395 CXX test/cpp_headers/blob.o 00:05:31.395 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:31.395 LINK reconnect 00:05:31.395 CXX test/cpp_headers/conf.o 00:05:31.395 CC examples/nvme/arbitration/arbitration.o 00:05:31.653 CXX test/cpp_headers/config.o 00:05:31.653 LINK blobcli 00:05:31.653 CXX test/cpp_headers/cpuset.o 00:05:31.653 LINK pci_ut 00:05:31.653 CXX test/cpp_headers/crc16.o 00:05:31.653 CC app/vhost/vhost.o 00:05:31.911 CC app/spdk_dd/spdk_dd.o 00:05:31.911 LINK spdk_top 00:05:31.911 CXX test/cpp_headers/crc32.o 00:05:31.911 CC app/fio/nvme/fio_plugin.o 00:05:31.911 LINK arbitration 00:05:31.911 CXX test/cpp_headers/crc64.o 00:05:31.911 LINK vhost 00:05:31.911 LINK iscsi_fuzz 00:05:31.911 CC app/fio/bdev/fio_plugin.o 00:05:32.168 CXX test/cpp_headers/dif.o 00:05:32.168 LINK nvme_manage 00:05:32.168 CC test/nvme/aer/aer.o 00:05:32.168 CC examples/bdev/hello_world/hello_bdev.o 00:05:32.168 LINK spdk_dd 00:05:32.168 CC test/rpc_client/rpc_client_test.o 00:05:32.168 CXX 
test/cpp_headers/dma.o 00:05:32.168 CXX test/cpp_headers/endian.o 00:05:32.426 CC examples/nvme/hotplug/hotplug.o 00:05:32.426 CC test/accel/dif/dif.o 00:05:32.426 LINK hello_bdev 00:05:32.426 LINK rpc_client_test 00:05:32.426 CXX test/cpp_headers/env_dpdk.o 00:05:32.426 LINK aer 00:05:32.426 LINK spdk_nvme 00:05:32.426 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:32.426 CC examples/nvme/abort/abort.o 00:05:32.426 CXX test/cpp_headers/env.o 00:05:32.690 LINK spdk_bdev 00:05:32.690 LINK hotplug 00:05:32.690 CC test/nvme/reset/reset.o 00:05:32.690 CC test/nvme/sgl/sgl.o 00:05:32.690 LINK cmb_copy 00:05:32.690 CC examples/bdev/bdevperf/bdevperf.o 00:05:32.690 CXX test/cpp_headers/event.o 00:05:32.690 CC test/nvme/e2edp/nvme_dp.o 00:05:32.690 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:32.948 CXX test/cpp_headers/fd_group.o 00:05:32.948 LINK abort 00:05:32.948 LINK sgl 00:05:32.948 LINK reset 00:05:32.948 CC test/nvme/overhead/overhead.o 00:05:32.948 CXX test/cpp_headers/fd.o 00:05:32.948 CC test/blobfs/mkfs/mkfs.o 00:05:32.948 LINK pmr_persistence 00:05:32.948 LINK nvme_dp 00:05:32.948 CXX test/cpp_headers/file.o 00:05:33.207 LINK dif 00:05:33.208 CC test/nvme/err_injection/err_injection.o 00:05:33.208 CC test/nvme/startup/startup.o 00:05:33.208 CXX test/cpp_headers/fsdev.o 00:05:33.208 LINK mkfs 00:05:33.208 CXX test/cpp_headers/fsdev_module.o 00:05:33.208 LINK overhead 00:05:33.208 CC test/nvme/reserve/reserve.o 00:05:33.467 CXX test/cpp_headers/ftl.o 00:05:33.467 LINK err_injection 00:05:33.467 LINK startup 00:05:33.467 CC test/lvol/esnap/esnap.o 00:05:33.467 CXX test/cpp_headers/fuse_dispatcher.o 00:05:33.467 CC test/nvme/simple_copy/simple_copy.o 00:05:33.467 CC test/nvme/connect_stress/connect_stress.o 00:05:33.467 LINK reserve 00:05:33.467 CXX test/cpp_headers/gpt_spec.o 00:05:33.467 CC test/bdev/bdevio/bdevio.o 00:05:33.726 CC test/nvme/boot_partition/boot_partition.o 00:05:33.726 CC test/nvme/fused_ordering/fused_ordering.o 00:05:33.726 CC 
test/nvme/compliance/nvme_compliance.o 00:05:33.726 LINK bdevperf 00:05:33.726 CXX test/cpp_headers/hexlify.o 00:05:33.726 LINK connect_stress 00:05:33.726 LINK simple_copy 00:05:33.726 LINK boot_partition 00:05:33.726 LINK fused_ordering 00:05:33.986 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:33.986 CXX test/cpp_headers/histogram_data.o 00:05:33.986 CXX test/cpp_headers/idxd.o 00:05:33.986 CXX test/cpp_headers/idxd_spec.o 00:05:33.986 LINK bdevio 00:05:33.986 CC test/nvme/fdp/fdp.o 00:05:33.986 LINK nvme_compliance 00:05:33.986 CC test/nvme/cuse/cuse.o 00:05:33.986 CC examples/nvmf/nvmf/nvmf.o 00:05:33.986 LINK doorbell_aers 00:05:33.986 CXX test/cpp_headers/init.o 00:05:34.246 CXX test/cpp_headers/ioat.o 00:05:34.246 CXX test/cpp_headers/ioat_spec.o 00:05:34.246 CXX test/cpp_headers/iscsi_spec.o 00:05:34.246 CXX test/cpp_headers/json.o 00:05:34.246 CXX test/cpp_headers/jsonrpc.o 00:05:34.246 CXX test/cpp_headers/keyring.o 00:05:34.246 CXX test/cpp_headers/keyring_module.o 00:05:34.246 CXX test/cpp_headers/likely.o 00:05:34.246 CXX test/cpp_headers/log.o 00:05:34.541 LINK nvmf 00:05:34.541 CXX test/cpp_headers/lvol.o 00:05:34.541 LINK fdp 00:05:34.541 CXX test/cpp_headers/md5.o 00:05:34.541 CXX test/cpp_headers/memory.o 00:05:34.541 CXX test/cpp_headers/mmio.o 00:05:34.541 CXX test/cpp_headers/nbd.o 00:05:34.541 CXX test/cpp_headers/net.o 00:05:34.541 CXX test/cpp_headers/notify.o 00:05:34.541 CXX test/cpp_headers/nvme.o 00:05:34.541 CXX test/cpp_headers/nvme_intel.o 00:05:34.541 CXX test/cpp_headers/nvme_ocssd.o 00:05:34.541 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:34.541 CXX test/cpp_headers/nvme_spec.o 00:05:34.810 CXX test/cpp_headers/nvme_zns.o 00:05:34.810 CXX test/cpp_headers/nvmf_cmd.o 00:05:34.810 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:34.810 CXX test/cpp_headers/nvmf.o 00:05:34.810 CXX test/cpp_headers/nvmf_spec.o 00:05:34.810 CXX test/cpp_headers/nvmf_transport.o 00:05:34.810 CXX test/cpp_headers/opal.o 00:05:34.810 CXX 
test/cpp_headers/opal_spec.o 00:05:34.810 CXX test/cpp_headers/pci_ids.o 00:05:34.810 CXX test/cpp_headers/pipe.o 00:05:34.810 CXX test/cpp_headers/queue.o 00:05:34.810 CXX test/cpp_headers/reduce.o 00:05:35.070 CXX test/cpp_headers/rpc.o 00:05:35.070 CXX test/cpp_headers/scheduler.o 00:05:35.070 CXX test/cpp_headers/scsi.o 00:05:35.070 CXX test/cpp_headers/scsi_spec.o 00:05:35.070 CXX test/cpp_headers/sock.o 00:05:35.070 CXX test/cpp_headers/stdinc.o 00:05:35.070 CXX test/cpp_headers/string.o 00:05:35.070 CXX test/cpp_headers/thread.o 00:05:35.070 CXX test/cpp_headers/trace.o 00:05:35.070 CXX test/cpp_headers/trace_parser.o 00:05:35.070 CXX test/cpp_headers/tree.o 00:05:35.070 CXX test/cpp_headers/ublk.o 00:05:35.070 CXX test/cpp_headers/util.o 00:05:35.329 CXX test/cpp_headers/uuid.o 00:05:35.329 CXX test/cpp_headers/version.o 00:05:35.329 CXX test/cpp_headers/vfio_user_pci.o 00:05:35.329 CXX test/cpp_headers/vfio_user_spec.o 00:05:35.329 CXX test/cpp_headers/vhost.o 00:05:35.329 CXX test/cpp_headers/vmd.o 00:05:35.329 CXX test/cpp_headers/xor.o 00:05:35.329 CXX test/cpp_headers/zipf.o 00:05:35.329 LINK cuse 00:05:39.520 LINK esnap 00:05:39.520 00:05:39.520 real 1m30.954s 00:05:39.520 user 7m16.974s 00:05:39.520 sys 1m14.774s 00:05:39.520 20:18:32 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:05:39.520 20:18:32 make -- common/autotest_common.sh@10 -- $ set +x 00:05:39.520 ************************************ 00:05:39.520 END TEST make 00:05:39.520 ************************************ 00:05:39.520 20:18:32 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:39.520 20:18:32 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:39.520 20:18:32 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:39.520 20:18:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:39.520 20:18:32 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:39.520 20:18:32 -- pm/common@44 -- $ 
pid=6197 00:05:39.520 20:18:32 -- pm/common@50 -- $ kill -TERM 6197 00:05:39.520 20:18:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:39.520 20:18:32 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:39.520 20:18:32 -- pm/common@44 -- $ pid=6199 00:05:39.520 20:18:32 -- pm/common@50 -- $ kill -TERM 6199 00:05:39.520 20:18:32 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:39.520 20:18:32 -- common/autotest_common.sh@1681 -- # lcov --version 00:05:39.520 20:18:32 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:39.520 20:18:32 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:39.520 20:18:32 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:39.520 20:18:32 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:39.520 20:18:32 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:39.520 20:18:32 -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.520 20:18:32 -- scripts/common.sh@336 -- # read -ra ver1 00:05:39.520 20:18:32 -- scripts/common.sh@337 -- # IFS=.-: 00:05:39.520 20:18:32 -- scripts/common.sh@337 -- # read -ra ver2 00:05:39.520 20:18:32 -- scripts/common.sh@338 -- # local 'op=<' 00:05:39.520 20:18:32 -- scripts/common.sh@340 -- # ver1_l=2 00:05:39.520 20:18:32 -- scripts/common.sh@341 -- # ver2_l=1 00:05:39.520 20:18:32 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:39.520 20:18:32 -- scripts/common.sh@344 -- # case "$op" in 00:05:39.520 20:18:32 -- scripts/common.sh@345 -- # : 1 00:05:39.520 20:18:32 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:39.520 20:18:32 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:39.520 20:18:32 -- scripts/common.sh@365 -- # decimal 1 00:05:39.520 20:18:32 -- scripts/common.sh@353 -- # local d=1 00:05:39.520 20:18:32 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.520 20:18:32 -- scripts/common.sh@355 -- # echo 1 00:05:39.520 20:18:32 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:39.520 20:18:32 -- scripts/common.sh@366 -- # decimal 2 00:05:39.520 20:18:32 -- scripts/common.sh@353 -- # local d=2 00:05:39.520 20:18:32 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.520 20:18:32 -- scripts/common.sh@355 -- # echo 2 00:05:39.520 20:18:32 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:39.520 20:18:32 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:39.520 20:18:32 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:39.520 20:18:32 -- scripts/common.sh@368 -- # return 0 00:05:39.520 20:18:32 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.520 20:18:32 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:39.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.520 --rc genhtml_branch_coverage=1 00:05:39.520 --rc genhtml_function_coverage=1 00:05:39.520 --rc genhtml_legend=1 00:05:39.520 --rc geninfo_all_blocks=1 00:05:39.520 --rc geninfo_unexecuted_blocks=1 00:05:39.520 00:05:39.520 ' 00:05:39.520 20:18:32 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:39.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.520 --rc genhtml_branch_coverage=1 00:05:39.520 --rc genhtml_function_coverage=1 00:05:39.520 --rc genhtml_legend=1 00:05:39.520 --rc geninfo_all_blocks=1 00:05:39.520 --rc geninfo_unexecuted_blocks=1 00:05:39.520 00:05:39.520 ' 00:05:39.520 20:18:32 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:39.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.520 --rc genhtml_branch_coverage=1 00:05:39.520 --rc 
genhtml_function_coverage=1 00:05:39.520 --rc genhtml_legend=1 00:05:39.520 --rc geninfo_all_blocks=1 00:05:39.520 --rc geninfo_unexecuted_blocks=1 00:05:39.520 00:05:39.520 ' 00:05:39.520 20:18:32 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:39.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.520 --rc genhtml_branch_coverage=1 00:05:39.520 --rc genhtml_function_coverage=1 00:05:39.520 --rc genhtml_legend=1 00:05:39.520 --rc geninfo_all_blocks=1 00:05:39.520 --rc geninfo_unexecuted_blocks=1 00:05:39.520 00:05:39.520 ' 00:05:39.520 20:18:32 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:39.520 20:18:32 -- nvmf/common.sh@7 -- # uname -s 00:05:39.520 20:18:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:39.520 20:18:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:39.520 20:18:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:39.520 20:18:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:39.520 20:18:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:39.520 20:18:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:39.520 20:18:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:39.520 20:18:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:39.520 20:18:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:39.520 20:18:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:39.520 20:18:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d1ebabbf-9595-44ff-861d-4578eb160443 00:05:39.520 20:18:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=d1ebabbf-9595-44ff-861d-4578eb160443 00:05:39.520 20:18:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:39.520 20:18:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:39.520 20:18:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:39.520 20:18:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:05:39.520 20:18:32 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:39.520 20:18:32 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:39.520 20:18:33 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:39.520 20:18:33 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:39.520 20:18:33 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:39.520 20:18:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.520 20:18:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.520 20:18:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.520 20:18:33 -- paths/export.sh@5 -- # export PATH 00:05:39.520 20:18:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.520 20:18:33 -- nvmf/common.sh@51 -- # : 0 00:05:39.520 20:18:33 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:39.520 20:18:33 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:39.520 20:18:33 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:05:39.520 20:18:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:39.520 20:18:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:39.520 20:18:33 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:39.520 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:39.520 20:18:33 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:39.520 20:18:33 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:39.520 20:18:33 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:39.520 20:18:33 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:39.520 20:18:33 -- spdk/autotest.sh@32 -- # uname -s 00:05:39.520 20:18:33 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:39.520 20:18:33 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:39.520 20:18:33 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:39.520 20:18:33 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:39.520 20:18:33 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:39.520 20:18:33 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:39.520 20:18:33 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:39.520 20:18:33 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:39.520 20:18:33 -- spdk/autotest.sh@48 -- # udevadm_pid=66985 00:05:39.520 20:18:33 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:39.520 20:18:33 -- pm/common@17 -- # local monitor 00:05:39.520 20:18:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:39.520 20:18:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:39.779 20:18:33 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:39.779 20:18:33 -- pm/common@21 -- # date +%s 00:05:39.779 20:18:33 -- pm/common@25 -- # sleep 1 00:05:39.779 20:18:33 -- 
pm/common@21 -- # date +%s 00:05:39.779 20:18:33 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732652313 00:05:39.779 20:18:33 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732652313 00:05:39.779 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732652313_collect-cpu-load.pm.log 00:05:39.779 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732652313_collect-vmstat.pm.log 00:05:40.721 20:18:34 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:40.721 20:18:34 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:40.721 20:18:34 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:40.721 20:18:34 -- common/autotest_common.sh@10 -- # set +x 00:05:40.721 20:18:34 -- spdk/autotest.sh@59 -- # create_test_list 00:05:40.721 20:18:34 -- common/autotest_common.sh@748 -- # xtrace_disable 00:05:40.721 20:18:34 -- common/autotest_common.sh@10 -- # set +x 00:05:40.721 20:18:34 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:40.721 20:18:34 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:40.721 20:18:34 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:40.721 20:18:34 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:40.721 20:18:34 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:40.721 20:18:34 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:40.721 20:18:34 -- common/autotest_common.sh@1455 -- # uname 00:05:40.721 20:18:34 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:05:40.721 20:18:34 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:40.721 20:18:34 -- common/autotest_common.sh@1475 -- 
# uname 00:05:40.721 20:18:34 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:05:40.721 20:18:34 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:40.721 20:18:34 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:40.721 lcov: LCOV version 1.15 00:05:40.721 20:18:34 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:55.624 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:55.624 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:06:10.560 20:19:03 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:06:10.560 20:19:03 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:10.560 20:19:03 -- common/autotest_common.sh@10 -- # set +x 00:06:10.560 20:19:03 -- spdk/autotest.sh@78 -- # rm -f 00:06:10.560 20:19:03 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:10.819 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:11.078 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:06:11.078 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:06:11.078 20:19:04 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:06:11.078 20:19:04 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:06:11.078 20:19:04 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:06:11.078 20:19:04 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:06:11.078 
20:19:04 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:11.078 20:19:04 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:06:11.078 20:19:04 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:06:11.078 20:19:04 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:11.078 20:19:04 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:11.078 20:19:04 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:11.078 20:19:04 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:06:11.078 20:19:04 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:06:11.078 20:19:04 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:11.078 20:19:04 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:11.078 20:19:04 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:11.078 20:19:04 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:06:11.078 20:19:04 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:06:11.078 20:19:04 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:06:11.078 20:19:04 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:11.078 20:19:04 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:11.078 20:19:04 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:06:11.078 20:19:04 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:06:11.078 20:19:04 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:06:11.078 20:19:04 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:11.078 20:19:04 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:06:11.078 20:19:04 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:11.078 20:19:04 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:11.078 20:19:04 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:06:11.078 20:19:04 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:06:11.078 20:19:04 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:11.078 No valid GPT data, bailing 00:06:11.078 20:19:04 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:11.078 20:19:04 -- scripts/common.sh@394 -- # pt= 00:06:11.078 20:19:04 -- scripts/common.sh@395 -- # return 1 00:06:11.078 20:19:04 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:11.078 1+0 records in 00:06:11.078 1+0 records out 00:06:11.078 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00581339 s, 180 MB/s 00:06:11.078 20:19:04 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:11.078 20:19:04 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:11.078 20:19:04 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:06:11.078 20:19:04 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:06:11.078 20:19:04 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:06:11.338 No valid GPT data, bailing 00:06:11.338 20:19:04 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:11.338 20:19:04 -- scripts/common.sh@394 -- # pt= 00:06:11.338 20:19:04 -- scripts/common.sh@395 -- # return 1 00:06:11.338 20:19:04 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:06:11.338 1+0 records in 00:06:11.338 1+0 records out 00:06:11.338 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00670166 s, 156 MB/s 00:06:11.338 20:19:04 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:11.338 20:19:04 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:11.338 20:19:04 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:06:11.338 20:19:04 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:06:11.338 20:19:04 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:06:11.338 No valid GPT data, bailing 00:06:11.339 20:19:04 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:06:11.339 20:19:04 -- scripts/common.sh@394 -- # pt= 00:06:11.339 20:19:04 -- scripts/common.sh@395 -- # return 1 00:06:11.339 20:19:04 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:06:11.339 1+0 records in 00:06:11.339 1+0 records out 00:06:11.339 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00665964 s, 157 MB/s 00:06:11.339 20:19:04 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:11.339 20:19:04 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:11.339 20:19:04 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:06:11.339 20:19:04 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:06:11.339 20:19:04 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:06:11.339 No valid GPT data, bailing 00:06:11.339 20:19:04 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:06:11.339 20:19:04 -- scripts/common.sh@394 -- # pt= 00:06:11.339 20:19:04 -- scripts/common.sh@395 -- # return 1 00:06:11.339 20:19:04 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:06:11.339 1+0 records in 00:06:11.339 1+0 records out 00:06:11.339 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0068536 s, 153 MB/s 00:06:11.339 20:19:04 -- spdk/autotest.sh@105 -- # sync 00:06:11.599 20:19:04 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:11.599 20:19:04 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:11.599 20:19:04 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:14.148 20:19:07 -- spdk/autotest.sh@111 -- # uname -s 00:06:14.148 20:19:07 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:06:14.148 20:19:07 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:06:14.148 20:19:07 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
00:06:14.714 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:14.714 Hugepages 00:06:14.714 node hugesize free / total 00:06:14.714 node0 1048576kB 0 / 0 00:06:14.714 node0 2048kB 0 / 0 00:06:14.714 00:06:14.714 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:14.714 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:14.973 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:14.973 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:06:14.973 20:19:08 -- spdk/autotest.sh@117 -- # uname -s 00:06:14.973 20:19:08 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:14.973 20:19:08 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:06:14.973 20:19:08 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:15.908 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:15.908 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:15.908 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:16.165 20:19:09 -- common/autotest_common.sh@1515 -- # sleep 1 00:06:17.100 20:19:10 -- common/autotest_common.sh@1516 -- # bdfs=() 00:06:17.100 20:19:10 -- common/autotest_common.sh@1516 -- # local bdfs 00:06:17.100 20:19:10 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:06:17.100 20:19:10 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:06:17.100 20:19:10 -- common/autotest_common.sh@1496 -- # bdfs=() 00:06:17.100 20:19:10 -- common/autotest_common.sh@1496 -- # local bdfs 00:06:17.100 20:19:10 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:17.100 20:19:10 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:06:17.100 20:19:10 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:17.100 20:19:10 -- 
common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:06:17.100 20:19:10 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:17.100 20:19:10 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:17.668 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:17.668 Waiting for block devices as requested 00:06:17.668 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:17.668 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:17.928 20:19:11 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:06:17.928 20:19:11 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:17.929 20:19:11 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:17.929 20:19:11 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:06:17.929 20:19:11 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:17.929 20:19:11 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:17.929 20:19:11 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:17.929 20:19:11 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:06:17.929 20:19:11 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:06:17.929 20:19:11 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:06:17.929 20:19:11 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:06:17.929 20:19:11 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:06:17.929 20:19:11 -- common/autotest_common.sh@1529 -- # grep oacs 00:06:17.929 20:19:11 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:06:17.929 20:19:11 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:06:17.929 20:19:11 -- common/autotest_common.sh@1532 -- 
# [[ 8 -ne 0 ]] 00:06:17.929 20:19:11 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:06:17.929 20:19:11 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:06:17.929 20:19:11 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:06:17.929 20:19:11 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:06:17.929 20:19:11 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:06:17.929 20:19:11 -- common/autotest_common.sh@1541 -- # continue 00:06:17.929 20:19:11 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:06:17.929 20:19:11 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:17.929 20:19:11 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:17.929 20:19:11 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:06:17.929 20:19:11 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:17.929 20:19:11 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:17.929 20:19:11 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:17.929 20:19:11 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:06:17.929 20:19:11 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:06:17.929 20:19:11 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:06:17.929 20:19:11 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:06:17.929 20:19:11 -- common/autotest_common.sh@1529 -- # grep oacs 00:06:17.929 20:19:11 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:06:17.929 20:19:11 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:06:17.929 20:19:11 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:06:17.929 20:19:11 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:06:17.929 20:19:11 -- common/autotest_common.sh@1538 -- # nvme id-ctrl 
/dev/nvme0 00:06:17.929 20:19:11 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:06:17.929 20:19:11 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:06:17.929 20:19:11 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:06:17.929 20:19:11 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:06:17.929 20:19:11 -- common/autotest_common.sh@1541 -- # continue 00:06:17.929 20:19:11 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:17.929 20:19:11 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:17.929 20:19:11 -- common/autotest_common.sh@10 -- # set +x 00:06:17.929 20:19:11 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:17.929 20:19:11 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:17.929 20:19:11 -- common/autotest_common.sh@10 -- # set +x 00:06:17.929 20:19:11 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:18.868 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:18.868 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:18.868 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:18.868 20:19:12 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:18.868 20:19:12 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:18.868 20:19:12 -- common/autotest_common.sh@10 -- # set +x 00:06:19.128 20:19:12 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:19.128 20:19:12 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:06:19.128 20:19:12 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:06:19.128 20:19:12 -- common/autotest_common.sh@1561 -- # bdfs=() 00:06:19.128 20:19:12 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:06:19.128 20:19:12 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:06:19.128 20:19:12 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:06:19.128 20:19:12 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:06:19.128 
20:19:12 -- common/autotest_common.sh@1496 -- # bdfs=() 00:06:19.128 20:19:12 -- common/autotest_common.sh@1496 -- # local bdfs 00:06:19.128 20:19:12 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:19.128 20:19:12 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:19.128 20:19:12 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:06:19.128 20:19:12 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:06:19.128 20:19:12 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:19.128 20:19:12 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:06:19.128 20:19:12 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:19.128 20:19:12 -- common/autotest_common.sh@1564 -- # device=0x0010 00:06:19.128 20:19:12 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:19.128 20:19:12 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:06:19.128 20:19:12 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:19.128 20:19:12 -- common/autotest_common.sh@1564 -- # device=0x0010 00:06:19.128 20:19:12 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:19.128 20:19:12 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:06:19.128 20:19:12 -- common/autotest_common.sh@1570 -- # return 0 00:06:19.128 20:19:12 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:06:19.128 20:19:12 -- common/autotest_common.sh@1578 -- # return 0 00:06:19.128 20:19:12 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:19.128 20:19:12 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:19.128 20:19:12 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:19.128 20:19:12 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:19.128 20:19:12 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:19.128 20:19:12 -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:06:19.128 20:19:12 -- common/autotest_common.sh@10 -- # set +x 00:06:19.128 20:19:12 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:06:19.128 20:19:12 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:19.128 20:19:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:19.128 20:19:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.128 20:19:12 -- common/autotest_common.sh@10 -- # set +x 00:06:19.128 ************************************ 00:06:19.128 START TEST env 00:06:19.128 ************************************ 00:06:19.128 20:19:12 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:19.388 * Looking for test storage... 00:06:19.388 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:19.388 20:19:12 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:19.388 20:19:12 env -- common/autotest_common.sh@1681 -- # lcov --version 00:06:19.388 20:19:12 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:19.388 20:19:12 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:19.388 20:19:12 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:19.388 20:19:12 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:19.388 20:19:12 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:19.388 20:19:12 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:19.388 20:19:12 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:19.388 20:19:12 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:19.388 20:19:12 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:19.388 20:19:12 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:19.388 20:19:12 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:19.388 20:19:12 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:19.388 20:19:12 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:19.388 20:19:12 env -- 
scripts/common.sh@344 -- # case "$op" in 00:06:19.388 20:19:12 env -- scripts/common.sh@345 -- # : 1 00:06:19.388 20:19:12 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:19.388 20:19:12 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:19.388 20:19:12 env -- scripts/common.sh@365 -- # decimal 1 00:06:19.388 20:19:12 env -- scripts/common.sh@353 -- # local d=1 00:06:19.388 20:19:12 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:19.388 20:19:12 env -- scripts/common.sh@355 -- # echo 1 00:06:19.388 20:19:12 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:19.388 20:19:12 env -- scripts/common.sh@366 -- # decimal 2 00:06:19.388 20:19:12 env -- scripts/common.sh@353 -- # local d=2 00:06:19.388 20:19:12 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:19.388 20:19:12 env -- scripts/common.sh@355 -- # echo 2 00:06:19.388 20:19:12 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:19.388 20:19:12 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:19.388 20:19:12 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:19.388 20:19:12 env -- scripts/common.sh@368 -- # return 0 00:06:19.388 20:19:12 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.388 20:19:12 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:19.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.388 --rc genhtml_branch_coverage=1 00:06:19.388 --rc genhtml_function_coverage=1 00:06:19.388 --rc genhtml_legend=1 00:06:19.388 --rc geninfo_all_blocks=1 00:06:19.388 --rc geninfo_unexecuted_blocks=1 00:06:19.388 00:06:19.388 ' 00:06:19.388 20:19:12 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:19.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.388 --rc genhtml_branch_coverage=1 00:06:19.388 --rc genhtml_function_coverage=1 00:06:19.388 --rc genhtml_legend=1 00:06:19.388 --rc 
geninfo_all_blocks=1 00:06:19.388 --rc geninfo_unexecuted_blocks=1 00:06:19.388 00:06:19.388 ' 00:06:19.388 20:19:12 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:19.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.388 --rc genhtml_branch_coverage=1 00:06:19.388 --rc genhtml_function_coverage=1 00:06:19.388 --rc genhtml_legend=1 00:06:19.388 --rc geninfo_all_blocks=1 00:06:19.388 --rc geninfo_unexecuted_blocks=1 00:06:19.388 00:06:19.388 ' 00:06:19.388 20:19:12 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:19.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.388 --rc genhtml_branch_coverage=1 00:06:19.388 --rc genhtml_function_coverage=1 00:06:19.388 --rc genhtml_legend=1 00:06:19.388 --rc geninfo_all_blocks=1 00:06:19.388 --rc geninfo_unexecuted_blocks=1 00:06:19.388 00:06:19.388 ' 00:06:19.388 20:19:12 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:19.388 20:19:12 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:19.388 20:19:12 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.388 20:19:12 env -- common/autotest_common.sh@10 -- # set +x 00:06:19.388 ************************************ 00:06:19.388 START TEST env_memory 00:06:19.388 ************************************ 00:06:19.388 20:19:12 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:19.388 00:06:19.388 00:06:19.388 CUnit - A unit testing framework for C - Version 2.1-3 00:06:19.388 http://cunit.sourceforge.net/ 00:06:19.388 00:06:19.388 00:06:19.388 Suite: memory 00:06:19.388 Test: alloc and free memory map ...[2024-11-26 20:19:12.923937] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:19.647 passed 00:06:19.647 Test: mem map translation ...[2024-11-26 20:19:12.970507] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:19.647 [2024-11-26 20:19:12.970689] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:19.647 [2024-11-26 20:19:12.970817] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:19.648 [2024-11-26 20:19:12.970899] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:19.648 passed 00:06:19.648 Test: mem map registration ...[2024-11-26 20:19:13.048521] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:19.648 [2024-11-26 20:19:13.048673] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:19.648 passed 00:06:19.648 Test: mem map adjacent registrations ...passed 00:06:19.648 00:06:19.648 Run Summary: Type Total Ran Passed Failed Inactive 00:06:19.648 suites 1 1 n/a 0 0 00:06:19.648 tests 4 4 4 0 0 00:06:19.648 asserts 152 152 152 0 n/a 00:06:19.648 00:06:19.648 Elapsed time = 0.258 seconds 00:06:19.648 00:06:19.648 real 0m0.312s 00:06:19.648 user 0m0.270s 00:06:19.648 sys 0m0.031s 00:06:19.648 20:19:13 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.648 20:19:13 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:19.648 ************************************ 00:06:19.648 END TEST env_memory 00:06:19.648 ************************************ 00:06:19.907 20:19:13 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:19.907 
20:19:13 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:19.907 20:19:13 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.907 20:19:13 env -- common/autotest_common.sh@10 -- # set +x 00:06:19.907 ************************************ 00:06:19.907 START TEST env_vtophys 00:06:19.907 ************************************ 00:06:19.907 20:19:13 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:19.907 EAL: lib.eal log level changed from notice to debug 00:06:19.907 EAL: Detected lcore 0 as core 0 on socket 0 00:06:19.907 EAL: Detected lcore 1 as core 0 on socket 0 00:06:19.907 EAL: Detected lcore 2 as core 0 on socket 0 00:06:19.907 EAL: Detected lcore 3 as core 0 on socket 0 00:06:19.907 EAL: Detected lcore 4 as core 0 on socket 0 00:06:19.907 EAL: Detected lcore 5 as core 0 on socket 0 00:06:19.907 EAL: Detected lcore 6 as core 0 on socket 0 00:06:19.907 EAL: Detected lcore 7 as core 0 on socket 0 00:06:19.907 EAL: Detected lcore 8 as core 0 on socket 0 00:06:19.907 EAL: Detected lcore 9 as core 0 on socket 0 00:06:19.907 EAL: Maximum logical cores by configuration: 128 00:06:19.907 EAL: Detected CPU lcores: 10 00:06:19.907 EAL: Detected NUMA nodes: 1 00:06:19.907 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:06:19.907 EAL: Detected shared linkage of DPDK 00:06:19.907 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:06:19.907 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:06:19.907 EAL: Registered [vdev] bus. 
00:06:19.907 EAL: bus.vdev log level changed from disabled to notice 00:06:19.907 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:06:19.907 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:06:19.907 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:06:19.907 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:06:19.907 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:06:19.907 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:06:19.907 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:06:19.907 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:06:19.907 EAL: No shared files mode enabled, IPC will be disabled 00:06:19.907 EAL: No shared files mode enabled, IPC is disabled 00:06:19.907 EAL: Selected IOVA mode 'PA' 00:06:19.907 EAL: Probing VFIO support... 00:06:19.907 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:19.907 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:19.907 EAL: Ask a virtual area of 0x2e000 bytes 00:06:19.907 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:19.907 EAL: Setting up physically contiguous memory... 
00:06:19.907 EAL: Setting maximum number of open files to 524288 00:06:19.907 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:19.907 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:19.907 EAL: Ask a virtual area of 0x61000 bytes 00:06:19.907 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:19.907 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:19.907 EAL: Ask a virtual area of 0x400000000 bytes 00:06:19.907 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:19.907 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:19.907 EAL: Ask a virtual area of 0x61000 bytes 00:06:19.907 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:19.907 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:19.907 EAL: Ask a virtual area of 0x400000000 bytes 00:06:19.907 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:19.907 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:19.907 EAL: Ask a virtual area of 0x61000 bytes 00:06:19.907 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:19.907 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:19.907 EAL: Ask a virtual area of 0x400000000 bytes 00:06:19.907 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:19.907 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:19.907 EAL: Ask a virtual area of 0x61000 bytes 00:06:19.907 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:19.907 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:19.907 EAL: Ask a virtual area of 0x400000000 bytes 00:06:19.908 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:19.908 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:19.908 EAL: Hugepages will be freed exactly as allocated. 
00:06:19.908 EAL: No shared files mode enabled, IPC is disabled 00:06:19.908 EAL: No shared files mode enabled, IPC is disabled 00:06:19.908 EAL: TSC frequency is ~2290000 KHz 00:06:19.908 EAL: Main lcore 0 is ready (tid=7fe788e19a40;cpuset=[0]) 00:06:19.908 EAL: Trying to obtain current memory policy. 00:06:19.908 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:19.908 EAL: Restoring previous memory policy: 0 00:06:19.908 EAL: request: mp_malloc_sync 00:06:19.908 EAL: No shared files mode enabled, IPC is disabled 00:06:19.908 EAL: Heap on socket 0 was expanded by 2MB 00:06:19.908 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:19.908 EAL: No shared files mode enabled, IPC is disabled 00:06:19.908 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:19.908 EAL: Mem event callback 'spdk:(nil)' registered 00:06:19.908 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:06:19.908 00:06:19.908 00:06:19.908 CUnit - A unit testing framework for C - Version 2.1-3 00:06:19.908 http://cunit.sourceforge.net/ 00:06:19.908 00:06:19.908 00:06:19.908 Suite: components_suite 00:06:20.483 Test: vtophys_malloc_test ...passed 00:06:20.483 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:20.483 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:20.483 EAL: Restoring previous memory policy: 4 00:06:20.483 EAL: Calling mem event callback 'spdk:(nil)' 00:06:20.483 EAL: request: mp_malloc_sync 00:06:20.483 EAL: No shared files mode enabled, IPC is disabled 00:06:20.483 EAL: Heap on socket 0 was expanded by 4MB 00:06:20.483 EAL: Calling mem event callback 'spdk:(nil)' 00:06:20.483 EAL: request: mp_malloc_sync 00:06:20.483 EAL: No shared files mode enabled, IPC is disabled 00:06:20.483 EAL: Heap on socket 0 was shrunk by 4MB 00:06:20.483 EAL: Trying to obtain current memory policy. 
00:06:20.483 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:20.483 EAL: Restoring previous memory policy: 4 00:06:20.483 EAL: Calling mem event callback 'spdk:(nil)' 00:06:20.483 EAL: request: mp_malloc_sync 00:06:20.483 EAL: No shared files mode enabled, IPC is disabled 00:06:20.483 EAL: Heap on socket 0 was expanded by 6MB 00:06:20.483 EAL: Calling mem event callback 'spdk:(nil)' 00:06:20.483 EAL: request: mp_malloc_sync 00:06:20.483 EAL: No shared files mode enabled, IPC is disabled 00:06:20.483 EAL: Heap on socket 0 was shrunk by 6MB 00:06:20.483 EAL: Trying to obtain current memory policy. 00:06:20.483 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:20.483 EAL: Restoring previous memory policy: 4 00:06:20.483 EAL: Calling mem event callback 'spdk:(nil)' 00:06:20.483 EAL: request: mp_malloc_sync 00:06:20.483 EAL: No shared files mode enabled, IPC is disabled 00:06:20.483 EAL: Heap on socket 0 was expanded by 10MB 00:06:20.483 EAL: Calling mem event callback 'spdk:(nil)' 00:06:20.483 EAL: request: mp_malloc_sync 00:06:20.483 EAL: No shared files mode enabled, IPC is disabled 00:06:20.483 EAL: Heap on socket 0 was shrunk by 10MB 00:06:20.483 EAL: Trying to obtain current memory policy. 00:06:20.483 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:20.483 EAL: Restoring previous memory policy: 4 00:06:20.483 EAL: Calling mem event callback 'spdk:(nil)' 00:06:20.483 EAL: request: mp_malloc_sync 00:06:20.483 EAL: No shared files mode enabled, IPC is disabled 00:06:20.483 EAL: Heap on socket 0 was expanded by 18MB 00:06:20.483 EAL: Calling mem event callback 'spdk:(nil)' 00:06:20.484 EAL: request: mp_malloc_sync 00:06:20.484 EAL: No shared files mode enabled, IPC is disabled 00:06:20.484 EAL: Heap on socket 0 was shrunk by 18MB 00:06:20.484 EAL: Trying to obtain current memory policy. 
00:06:20.484 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:20.484 EAL: Restoring previous memory policy: 4 00:06:20.484 EAL: Calling mem event callback 'spdk:(nil)' 00:06:20.484 EAL: request: mp_malloc_sync 00:06:20.484 EAL: No shared files mode enabled, IPC is disabled 00:06:20.484 EAL: Heap on socket 0 was expanded by 34MB 00:06:20.484 EAL: Calling mem event callback 'spdk:(nil)' 00:06:20.484 EAL: request: mp_malloc_sync 00:06:20.484 EAL: No shared files mode enabled, IPC is disabled 00:06:20.484 EAL: Heap on socket 0 was shrunk by 34MB 00:06:20.484 EAL: Trying to obtain current memory policy. 00:06:20.484 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:20.484 EAL: Restoring previous memory policy: 4 00:06:20.484 EAL: Calling mem event callback 'spdk:(nil)' 00:06:20.484 EAL: request: mp_malloc_sync 00:06:20.484 EAL: No shared files mode enabled, IPC is disabled 00:06:20.484 EAL: Heap on socket 0 was expanded by 66MB 00:06:20.484 EAL: Calling mem event callback 'spdk:(nil)' 00:06:20.484 EAL: request: mp_malloc_sync 00:06:20.484 EAL: No shared files mode enabled, IPC is disabled 00:06:20.484 EAL: Heap on socket 0 was shrunk by 66MB 00:06:20.484 EAL: Trying to obtain current memory policy. 00:06:20.484 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:20.484 EAL: Restoring previous memory policy: 4 00:06:20.484 EAL: Calling mem event callback 'spdk:(nil)' 00:06:20.484 EAL: request: mp_malloc_sync 00:06:20.484 EAL: No shared files mode enabled, IPC is disabled 00:06:20.484 EAL: Heap on socket 0 was expanded by 130MB 00:06:20.484 EAL: Calling mem event callback 'spdk:(nil)' 00:06:20.484 EAL: request: mp_malloc_sync 00:06:20.484 EAL: No shared files mode enabled, IPC is disabled 00:06:20.484 EAL: Heap on socket 0 was shrunk by 130MB 00:06:20.484 EAL: Trying to obtain current memory policy. 
00:06:20.484 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:20.484 EAL: Restoring previous memory policy: 4 00:06:20.484 EAL: Calling mem event callback 'spdk:(nil)' 00:06:20.484 EAL: request: mp_malloc_sync 00:06:20.484 EAL: No shared files mode enabled, IPC is disabled 00:06:20.484 EAL: Heap on socket 0 was expanded by 258MB 00:06:20.484 EAL: Calling mem event callback 'spdk:(nil)' 00:06:20.755 EAL: request: mp_malloc_sync 00:06:20.755 EAL: No shared files mode enabled, IPC is disabled 00:06:20.755 EAL: Heap on socket 0 was shrunk by 258MB 00:06:20.755 EAL: Trying to obtain current memory policy. 00:06:20.755 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:20.755 EAL: Restoring previous memory policy: 4 00:06:20.755 EAL: Calling mem event callback 'spdk:(nil)' 00:06:20.755 EAL: request: mp_malloc_sync 00:06:20.755 EAL: No shared files mode enabled, IPC is disabled 00:06:20.755 EAL: Heap on socket 0 was expanded by 514MB 00:06:20.755 EAL: Calling mem event callback 'spdk:(nil)' 00:06:21.015 EAL: request: mp_malloc_sync 00:06:21.015 EAL: No shared files mode enabled, IPC is disabled 00:06:21.015 EAL: Heap on socket 0 was shrunk by 514MB 00:06:21.015 EAL: Trying to obtain current memory policy. 
00:06:21.015 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:21.275 EAL: Restoring previous memory policy: 4 00:06:21.275 EAL: Calling mem event callback 'spdk:(nil)' 00:06:21.275 EAL: request: mp_malloc_sync 00:06:21.275 EAL: No shared files mode enabled, IPC is disabled 00:06:21.275 EAL: Heap on socket 0 was expanded by 1026MB 00:06:21.535 EAL: Calling mem event callback 'spdk:(nil)' 00:06:21.795 passed 00:06:21.795 00:06:21.795 Run Summary: Type Total Ran Passed Failed Inactive 00:06:21.795 suites 1 1 n/a 0 0 00:06:21.795 tests 2 2 2 0 0 00:06:21.795 asserts 5323 5323 5323 0 n/a 00:06:21.795 00:06:21.795 Elapsed time = 1.832 seconds 00:06:21.795 EAL: request: mp_malloc_sync 00:06:21.795 EAL: No shared files mode enabled, IPC is disabled 00:06:21.795 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:21.795 EAL: Calling mem event callback 'spdk:(nil)' 00:06:21.795 EAL: request: mp_malloc_sync 00:06:21.795 EAL: No shared files mode enabled, IPC is disabled 00:06:21.795 EAL: Heap on socket 0 was shrunk by 2MB 00:06:21.795 EAL: No shared files mode enabled, IPC is disabled 00:06:21.795 EAL: No shared files mode enabled, IPC is disabled 00:06:21.795 EAL: No shared files mode enabled, IPC is disabled 00:06:21.795 00:06:21.795 real 0m2.095s 00:06:21.795 user 0m1.030s 00:06:21.795 sys 0m0.921s 00:06:21.795 ************************************ 00:06:21.795 END TEST env_vtophys 00:06:21.795 ************************************ 00:06:21.795 20:19:15 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.795 20:19:15 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:22.054 20:19:15 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:22.054 20:19:15 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:22.054 20:19:15 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.054 20:19:15 env -- common/autotest_common.sh@10 -- # set +x 00:06:22.054 
************************************ 00:06:22.054 START TEST env_pci 00:06:22.054 ************************************ 00:06:22.054 20:19:15 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:22.054 00:06:22.054 00:06:22.054 CUnit - A unit testing framework for C - Version 2.1-3 00:06:22.054 http://cunit.sourceforge.net/ 00:06:22.054 00:06:22.054 00:06:22.054 Suite: pci 00:06:22.054 Test: pci_hook ...[2024-11-26 20:19:15.422681] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 69253 has claimed it 00:06:22.054 passed 00:06:22.054 00:06:22.054 Run Summary: Type Total Ran Passed Failed Inactive 00:06:22.054 suites 1 1 n/a 0 0 00:06:22.054 tests 1 1 1 0 0 00:06:22.055 asserts 25 25 25 0 n/a 00:06:22.055 00:06:22.055 Elapsed time = 0.006 seconds 00:06:22.055 EAL: Cannot find device (10000:00:01.0) 00:06:22.055 EAL: Failed to attach device on primary process 00:06:22.055 00:06:22.055 real 0m0.094s 00:06:22.055 user 0m0.043s 00:06:22.055 sys 0m0.050s 00:06:22.055 20:19:15 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.055 ************************************ 00:06:22.055 END TEST env_pci 00:06:22.055 ************************************ 00:06:22.055 20:19:15 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:22.055 20:19:15 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:22.055 20:19:15 env -- env/env.sh@15 -- # uname 00:06:22.055 20:19:15 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:22.055 20:19:15 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:22.055 20:19:15 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:22.055 20:19:15 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:22.055 20:19:15 env 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.055 20:19:15 env -- common/autotest_common.sh@10 -- # set +x 00:06:22.055 ************************************ 00:06:22.055 START TEST env_dpdk_post_init 00:06:22.055 ************************************ 00:06:22.055 20:19:15 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:22.313 EAL: Detected CPU lcores: 10 00:06:22.313 EAL: Detected NUMA nodes: 1 00:06:22.313 EAL: Detected shared linkage of DPDK 00:06:22.313 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:22.313 EAL: Selected IOVA mode 'PA' 00:06:22.313 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:22.313 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:06:22.313 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:06:22.313 Starting DPDK initialization... 00:06:22.313 Starting SPDK post initialization... 00:06:22.313 SPDK NVMe probe 00:06:22.313 Attaching to 0000:00:10.0 00:06:22.313 Attaching to 0000:00:11.0 00:06:22.313 Attached to 0000:00:10.0 00:06:22.313 Attached to 0000:00:11.0 00:06:22.313 Cleaning up... 
00:06:22.313 00:06:22.313 real 0m0.255s 00:06:22.313 user 0m0.071s 00:06:22.313 sys 0m0.085s 00:06:22.313 20:19:15 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.313 20:19:15 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:22.313 ************************************ 00:06:22.313 END TEST env_dpdk_post_init 00:06:22.313 ************************************ 00:06:22.572 20:19:15 env -- env/env.sh@26 -- # uname 00:06:22.572 20:19:15 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:22.572 20:19:15 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:22.572 20:19:15 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:22.572 20:19:15 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.572 20:19:15 env -- common/autotest_common.sh@10 -- # set +x 00:06:22.572 ************************************ 00:06:22.572 START TEST env_mem_callbacks 00:06:22.572 ************************************ 00:06:22.572 20:19:15 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:22.572 EAL: Detected CPU lcores: 10 00:06:22.572 EAL: Detected NUMA nodes: 1 00:06:22.572 EAL: Detected shared linkage of DPDK 00:06:22.572 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:22.572 EAL: Selected IOVA mode 'PA' 00:06:22.572 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:22.572 00:06:22.572 00:06:22.572 CUnit - A unit testing framework for C - Version 2.1-3 00:06:22.572 http://cunit.sourceforge.net/ 00:06:22.572 00:06:22.572 00:06:22.572 Suite: memory 00:06:22.572 Test: test ... 
00:06:22.572 register 0x200000200000 2097152 00:06:22.572 malloc 3145728 00:06:22.572 register 0x200000400000 4194304 00:06:22.572 buf 0x200000500000 len 3145728 PASSED 00:06:22.572 malloc 64 00:06:22.572 buf 0x2000004fff40 len 64 PASSED 00:06:22.572 malloc 4194304 00:06:22.572 register 0x200000800000 6291456 00:06:22.572 buf 0x200000a00000 len 4194304 PASSED 00:06:22.572 free 0x200000500000 3145728 00:06:22.572 free 0x2000004fff40 64 00:06:22.572 unregister 0x200000400000 4194304 PASSED 00:06:22.572 free 0x200000a00000 4194304 00:06:22.572 unregister 0x200000800000 6291456 PASSED 00:06:22.572 malloc 8388608 00:06:22.572 register 0x200000400000 10485760 00:06:22.572 buf 0x200000600000 len 8388608 PASSED 00:06:22.572 free 0x200000600000 8388608 00:06:22.572 unregister 0x200000400000 10485760 PASSED 00:06:22.572 passed 00:06:22.572 00:06:22.572 Run Summary: Type Total Ran Passed Failed Inactive 00:06:22.572 suites 1 1 n/a 0 0 00:06:22.572 tests 1 1 1 0 0 00:06:22.572 asserts 15 15 15 0 n/a 00:06:22.572 00:06:22.572 Elapsed time = 0.015 seconds 00:06:22.572 00:06:22.572 real 0m0.218s 00:06:22.572 user 0m0.042s 00:06:22.572 sys 0m0.072s 00:06:22.572 20:19:16 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.572 20:19:16 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:22.572 ************************************ 00:06:22.572 END TEST env_mem_callbacks 00:06:22.572 ************************************ 00:06:22.830 00:06:22.830 real 0m3.549s 00:06:22.830 user 0m1.670s 00:06:22.830 sys 0m1.537s 00:06:22.830 20:19:16 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.830 20:19:16 env -- common/autotest_common.sh@10 -- # set +x 00:06:22.830 ************************************ 00:06:22.830 END TEST env 00:06:22.830 ************************************ 00:06:22.830 20:19:16 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:22.830 20:19:16 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:22.830 20:19:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.830 20:19:16 -- common/autotest_common.sh@10 -- # set +x 00:06:22.830 ************************************ 00:06:22.830 START TEST rpc 00:06:22.830 ************************************ 00:06:22.830 20:19:16 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:22.830 * Looking for test storage... 00:06:22.830 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:22.830 20:19:16 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:22.830 20:19:16 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:22.830 20:19:16 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:23.087 20:19:16 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:23.087 20:19:16 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.087 20:19:16 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.087 20:19:16 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.087 20:19:16 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.087 20:19:16 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.087 20:19:16 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.087 20:19:16 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.087 20:19:16 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.087 20:19:16 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.087 20:19:16 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.087 20:19:16 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.087 20:19:16 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:23.087 20:19:16 rpc -- scripts/common.sh@345 -- # : 1 00:06:23.087 20:19:16 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.087 20:19:16 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:23.087 20:19:16 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:23.087 20:19:16 rpc -- scripts/common.sh@353 -- # local d=1 00:06:23.087 20:19:16 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.087 20:19:16 rpc -- scripts/common.sh@355 -- # echo 1 00:06:23.087 20:19:16 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.087 20:19:16 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:23.087 20:19:16 rpc -- scripts/common.sh@353 -- # local d=2 00:06:23.087 20:19:16 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.087 20:19:16 rpc -- scripts/common.sh@355 -- # echo 2 00:06:23.087 20:19:16 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.087 20:19:16 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.087 20:19:16 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.087 20:19:16 rpc -- scripts/common.sh@368 -- # return 0 00:06:23.087 20:19:16 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.088 20:19:16 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:23.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.088 --rc genhtml_branch_coverage=1 00:06:23.088 --rc genhtml_function_coverage=1 00:06:23.088 --rc genhtml_legend=1 00:06:23.088 --rc geninfo_all_blocks=1 00:06:23.088 --rc geninfo_unexecuted_blocks=1 00:06:23.088 00:06:23.088 ' 00:06:23.088 20:19:16 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:23.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.088 --rc genhtml_branch_coverage=1 00:06:23.088 --rc genhtml_function_coverage=1 00:06:23.088 --rc genhtml_legend=1 00:06:23.088 --rc geninfo_all_blocks=1 00:06:23.088 --rc geninfo_unexecuted_blocks=1 00:06:23.088 00:06:23.088 ' 00:06:23.088 20:19:16 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:23.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:23.088 --rc genhtml_branch_coverage=1 00:06:23.088 --rc genhtml_function_coverage=1 00:06:23.088 --rc genhtml_legend=1 00:06:23.088 --rc geninfo_all_blocks=1 00:06:23.088 --rc geninfo_unexecuted_blocks=1 00:06:23.088 00:06:23.088 ' 00:06:23.088 20:19:16 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:23.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.088 --rc genhtml_branch_coverage=1 00:06:23.088 --rc genhtml_function_coverage=1 00:06:23.088 --rc genhtml_legend=1 00:06:23.088 --rc geninfo_all_blocks=1 00:06:23.088 --rc geninfo_unexecuted_blocks=1 00:06:23.088 00:06:23.088 ' 00:06:23.088 20:19:16 rpc -- rpc/rpc.sh@65 -- # spdk_pid=69380 00:06:23.088 20:19:16 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:23.088 20:19:16 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:23.088 20:19:16 rpc -- rpc/rpc.sh@67 -- # waitforlisten 69380 00:06:23.088 20:19:16 rpc -- common/autotest_common.sh@831 -- # '[' -z 69380 ']' 00:06:23.088 20:19:16 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.088 20:19:16 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:23.088 20:19:16 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.088 20:19:16 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:23.088 20:19:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.088 [2024-11-26 20:19:16.549361] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:23.088 [2024-11-26 20:19:16.549498] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69380 ] 00:06:23.346 [2024-11-26 20:19:16.714404] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.346 [2024-11-26 20:19:16.795341] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:23.346 [2024-11-26 20:19:16.795502] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 69380' to capture a snapshot of events at runtime. 00:06:23.346 [2024-11-26 20:19:16.795521] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:23.346 [2024-11-26 20:19:16.795538] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:23.346 [2024-11-26 20:19:16.795553] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid69380 for offline analysis/debug. 
00:06:23.346 [2024-11-26 20:19:16.795598] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.912 20:19:17 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:23.912 20:19:17 rpc -- common/autotest_common.sh@864 -- # return 0 00:06:23.912 20:19:17 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:23.912 20:19:17 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:23.912 20:19:17 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:23.912 20:19:17 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:23.912 20:19:17 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:23.912 20:19:17 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.912 20:19:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.912 ************************************ 00:06:23.912 START TEST rpc_integrity 00:06:23.912 ************************************ 00:06:23.912 20:19:17 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:23.912 20:19:17 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:23.912 20:19:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.912 20:19:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.170 20:19:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.170 20:19:17 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:24.170 20:19:17 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:24.170 20:19:17 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:24.170 20:19:17 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:24.170 20:19:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.170 20:19:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.170 20:19:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.170 20:19:17 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:24.170 20:19:17 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:24.170 20:19:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.170 20:19:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.170 20:19:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.170 20:19:17 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:24.170 { 00:06:24.170 "name": "Malloc0", 00:06:24.170 "aliases": [ 00:06:24.170 "4b27f88c-322c-49ac-bf91-78c1837f5eb3" 00:06:24.170 ], 00:06:24.170 "product_name": "Malloc disk", 00:06:24.170 "block_size": 512, 00:06:24.170 "num_blocks": 16384, 00:06:24.170 "uuid": "4b27f88c-322c-49ac-bf91-78c1837f5eb3", 00:06:24.170 "assigned_rate_limits": { 00:06:24.170 "rw_ios_per_sec": 0, 00:06:24.170 "rw_mbytes_per_sec": 0, 00:06:24.170 "r_mbytes_per_sec": 0, 00:06:24.170 "w_mbytes_per_sec": 0 00:06:24.170 }, 00:06:24.170 "claimed": false, 00:06:24.170 "zoned": false, 00:06:24.170 "supported_io_types": { 00:06:24.170 "read": true, 00:06:24.170 "write": true, 00:06:24.170 "unmap": true, 00:06:24.170 "flush": true, 00:06:24.170 "reset": true, 00:06:24.170 "nvme_admin": false, 00:06:24.170 "nvme_io": false, 00:06:24.170 "nvme_io_md": false, 00:06:24.170 "write_zeroes": true, 00:06:24.170 "zcopy": true, 00:06:24.170 "get_zone_info": false, 00:06:24.170 "zone_management": false, 00:06:24.170 "zone_append": false, 00:06:24.170 "compare": false, 00:06:24.170 "compare_and_write": false, 00:06:24.170 "abort": true, 00:06:24.170 "seek_hole": false, 
00:06:24.170 "seek_data": false, 00:06:24.170 "copy": true, 00:06:24.170 "nvme_iov_md": false 00:06:24.170 }, 00:06:24.170 "memory_domains": [ 00:06:24.170 { 00:06:24.170 "dma_device_id": "system", 00:06:24.170 "dma_device_type": 1 00:06:24.170 }, 00:06:24.170 { 00:06:24.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:24.170 "dma_device_type": 2 00:06:24.170 } 00:06:24.170 ], 00:06:24.170 "driver_specific": {} 00:06:24.170 } 00:06:24.170 ]' 00:06:24.170 20:19:17 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:24.170 20:19:17 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:24.170 20:19:17 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:24.170 20:19:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.170 20:19:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.170 [2024-11-26 20:19:17.620250] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:24.170 [2024-11-26 20:19:17.620346] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:24.170 [2024-11-26 20:19:17.620385] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:06:24.170 [2024-11-26 20:19:17.620396] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:24.170 [2024-11-26 20:19:17.623012] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:24.170 [2024-11-26 20:19:17.623115] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:24.170 Passthru0 00:06:24.170 20:19:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.170 20:19:17 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:24.170 20:19:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.170 20:19:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:06:24.170 20:19:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.170 20:19:17 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:24.170 { 00:06:24.170 "name": "Malloc0", 00:06:24.170 "aliases": [ 00:06:24.170 "4b27f88c-322c-49ac-bf91-78c1837f5eb3" 00:06:24.170 ], 00:06:24.170 "product_name": "Malloc disk", 00:06:24.170 "block_size": 512, 00:06:24.170 "num_blocks": 16384, 00:06:24.170 "uuid": "4b27f88c-322c-49ac-bf91-78c1837f5eb3", 00:06:24.170 "assigned_rate_limits": { 00:06:24.170 "rw_ios_per_sec": 0, 00:06:24.170 "rw_mbytes_per_sec": 0, 00:06:24.170 "r_mbytes_per_sec": 0, 00:06:24.170 "w_mbytes_per_sec": 0 00:06:24.170 }, 00:06:24.170 "claimed": true, 00:06:24.170 "claim_type": "exclusive_write", 00:06:24.170 "zoned": false, 00:06:24.170 "supported_io_types": { 00:06:24.170 "read": true, 00:06:24.170 "write": true, 00:06:24.170 "unmap": true, 00:06:24.170 "flush": true, 00:06:24.170 "reset": true, 00:06:24.170 "nvme_admin": false, 00:06:24.170 "nvme_io": false, 00:06:24.170 "nvme_io_md": false, 00:06:24.170 "write_zeroes": true, 00:06:24.170 "zcopy": true, 00:06:24.170 "get_zone_info": false, 00:06:24.170 "zone_management": false, 00:06:24.170 "zone_append": false, 00:06:24.170 "compare": false, 00:06:24.170 "compare_and_write": false, 00:06:24.170 "abort": true, 00:06:24.170 "seek_hole": false, 00:06:24.170 "seek_data": false, 00:06:24.170 "copy": true, 00:06:24.170 "nvme_iov_md": false 00:06:24.170 }, 00:06:24.170 "memory_domains": [ 00:06:24.170 { 00:06:24.170 "dma_device_id": "system", 00:06:24.170 "dma_device_type": 1 00:06:24.170 }, 00:06:24.170 { 00:06:24.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:24.170 "dma_device_type": 2 00:06:24.170 } 00:06:24.170 ], 00:06:24.170 "driver_specific": {} 00:06:24.170 }, 00:06:24.170 { 00:06:24.170 "name": "Passthru0", 00:06:24.170 "aliases": [ 00:06:24.170 "2ec6c32d-2cf0-5bff-81e8-a54ee528e85f" 00:06:24.170 ], 00:06:24.170 "product_name": "passthru", 00:06:24.170 
"block_size": 512, 00:06:24.170 "num_blocks": 16384, 00:06:24.170 "uuid": "2ec6c32d-2cf0-5bff-81e8-a54ee528e85f", 00:06:24.170 "assigned_rate_limits": { 00:06:24.170 "rw_ios_per_sec": 0, 00:06:24.170 "rw_mbytes_per_sec": 0, 00:06:24.170 "r_mbytes_per_sec": 0, 00:06:24.170 "w_mbytes_per_sec": 0 00:06:24.170 }, 00:06:24.170 "claimed": false, 00:06:24.170 "zoned": false, 00:06:24.170 "supported_io_types": { 00:06:24.170 "read": true, 00:06:24.170 "write": true, 00:06:24.170 "unmap": true, 00:06:24.170 "flush": true, 00:06:24.170 "reset": true, 00:06:24.171 "nvme_admin": false, 00:06:24.171 "nvme_io": false, 00:06:24.171 "nvme_io_md": false, 00:06:24.171 "write_zeroes": true, 00:06:24.171 "zcopy": true, 00:06:24.171 "get_zone_info": false, 00:06:24.171 "zone_management": false, 00:06:24.171 "zone_append": false, 00:06:24.171 "compare": false, 00:06:24.171 "compare_and_write": false, 00:06:24.171 "abort": true, 00:06:24.171 "seek_hole": false, 00:06:24.171 "seek_data": false, 00:06:24.171 "copy": true, 00:06:24.171 "nvme_iov_md": false 00:06:24.171 }, 00:06:24.171 "memory_domains": [ 00:06:24.171 { 00:06:24.171 "dma_device_id": "system", 00:06:24.171 "dma_device_type": 1 00:06:24.171 }, 00:06:24.171 { 00:06:24.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:24.171 "dma_device_type": 2 00:06:24.171 } 00:06:24.171 ], 00:06:24.171 "driver_specific": { 00:06:24.171 "passthru": { 00:06:24.171 "name": "Passthru0", 00:06:24.171 "base_bdev_name": "Malloc0" 00:06:24.171 } 00:06:24.171 } 00:06:24.171 } 00:06:24.171 ]' 00:06:24.171 20:19:17 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:24.171 20:19:17 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:24.171 20:19:17 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:24.171 20:19:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.171 20:19:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.429 20:19:17 rpc.rpc_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.429 20:19:17 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:24.429 20:19:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.429 20:19:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.429 20:19:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.429 20:19:17 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:24.429 20:19:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.429 20:19:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.429 20:19:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.429 20:19:17 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:24.429 20:19:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:24.429 ************************************ 00:06:24.429 END TEST rpc_integrity 00:06:24.429 ************************************ 00:06:24.429 20:19:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:24.429 00:06:24.429 real 0m0.344s 00:06:24.429 user 0m0.204s 00:06:24.429 sys 0m0.062s 00:06:24.429 20:19:17 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.429 20:19:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.429 20:19:17 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:24.429 20:19:17 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:24.429 20:19:17 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.429 20:19:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.430 ************************************ 00:06:24.430 START TEST rpc_plugins 00:06:24.430 ************************************ 00:06:24.430 20:19:17 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:06:24.430 20:19:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:06:24.430 20:19:17 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.430 20:19:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:24.430 20:19:17 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.430 20:19:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:24.430 20:19:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:24.430 20:19:17 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.430 20:19:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:24.430 20:19:17 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.430 20:19:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:24.430 { 00:06:24.430 "name": "Malloc1", 00:06:24.430 "aliases": [ 00:06:24.430 "fbfd4e52-41f5-4eb9-b01c-d3fb3834a0f6" 00:06:24.430 ], 00:06:24.430 "product_name": "Malloc disk", 00:06:24.430 "block_size": 4096, 00:06:24.430 "num_blocks": 256, 00:06:24.430 "uuid": "fbfd4e52-41f5-4eb9-b01c-d3fb3834a0f6", 00:06:24.430 "assigned_rate_limits": { 00:06:24.430 "rw_ios_per_sec": 0, 00:06:24.430 "rw_mbytes_per_sec": 0, 00:06:24.430 "r_mbytes_per_sec": 0, 00:06:24.430 "w_mbytes_per_sec": 0 00:06:24.430 }, 00:06:24.430 "claimed": false, 00:06:24.430 "zoned": false, 00:06:24.430 "supported_io_types": { 00:06:24.430 "read": true, 00:06:24.430 "write": true, 00:06:24.430 "unmap": true, 00:06:24.430 "flush": true, 00:06:24.430 "reset": true, 00:06:24.430 "nvme_admin": false, 00:06:24.430 "nvme_io": false, 00:06:24.430 "nvme_io_md": false, 00:06:24.430 "write_zeroes": true, 00:06:24.430 "zcopy": true, 00:06:24.430 "get_zone_info": false, 00:06:24.430 "zone_management": false, 00:06:24.430 "zone_append": false, 00:06:24.430 "compare": false, 00:06:24.430 "compare_and_write": false, 00:06:24.430 "abort": true, 00:06:24.430 "seek_hole": false, 00:06:24.430 "seek_data": false, 00:06:24.430 "copy": 
true, 00:06:24.430 "nvme_iov_md": false 00:06:24.430 }, 00:06:24.430 "memory_domains": [ 00:06:24.430 { 00:06:24.430 "dma_device_id": "system", 00:06:24.430 "dma_device_type": 1 00:06:24.430 }, 00:06:24.430 { 00:06:24.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:24.430 "dma_device_type": 2 00:06:24.430 } 00:06:24.430 ], 00:06:24.430 "driver_specific": {} 00:06:24.430 } 00:06:24.430 ]' 00:06:24.430 20:19:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:24.430 20:19:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:24.430 20:19:17 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:24.430 20:19:17 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.430 20:19:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:24.430 20:19:17 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.430 20:19:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:24.430 20:19:17 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.430 20:19:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:24.689 20:19:17 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.689 20:19:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:24.689 20:19:17 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:24.689 ************************************ 00:06:24.689 END TEST rpc_plugins 00:06:24.689 ************************************ 00:06:24.689 20:19:18 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:24.689 00:06:24.689 real 0m0.169s 00:06:24.689 user 0m0.104s 00:06:24.689 sys 0m0.025s 00:06:24.689 20:19:18 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.689 20:19:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:24.689 20:19:18 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:24.689 20:19:18 rpc -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:24.689 20:19:18 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.689 20:19:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.689 ************************************ 00:06:24.689 START TEST rpc_trace_cmd_test 00:06:24.689 ************************************ 00:06:24.689 20:19:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:06:24.689 20:19:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:24.689 20:19:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:24.689 20:19:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.689 20:19:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.689 20:19:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.689 20:19:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:24.689 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid69380", 00:06:24.689 "tpoint_group_mask": "0x8", 00:06:24.689 "iscsi_conn": { 00:06:24.689 "mask": "0x2", 00:06:24.689 "tpoint_mask": "0x0" 00:06:24.689 }, 00:06:24.689 "scsi": { 00:06:24.689 "mask": "0x4", 00:06:24.689 "tpoint_mask": "0x0" 00:06:24.689 }, 00:06:24.689 "bdev": { 00:06:24.689 "mask": "0x8", 00:06:24.689 "tpoint_mask": "0xffffffffffffffff" 00:06:24.689 }, 00:06:24.689 "nvmf_rdma": { 00:06:24.689 "mask": "0x10", 00:06:24.689 "tpoint_mask": "0x0" 00:06:24.689 }, 00:06:24.689 "nvmf_tcp": { 00:06:24.689 "mask": "0x20", 00:06:24.689 "tpoint_mask": "0x0" 00:06:24.689 }, 00:06:24.689 "ftl": { 00:06:24.689 "mask": "0x40", 00:06:24.689 "tpoint_mask": "0x0" 00:06:24.689 }, 00:06:24.689 "blobfs": { 00:06:24.689 "mask": "0x80", 00:06:24.689 "tpoint_mask": "0x0" 00:06:24.689 }, 00:06:24.689 "dsa": { 00:06:24.689 "mask": "0x200", 00:06:24.689 "tpoint_mask": "0x0" 00:06:24.689 }, 00:06:24.689 "thread": { 00:06:24.689 "mask": "0x400", 00:06:24.689 
"tpoint_mask": "0x0" 00:06:24.689 }, 00:06:24.689 "nvme_pcie": { 00:06:24.689 "mask": "0x800", 00:06:24.689 "tpoint_mask": "0x0" 00:06:24.689 }, 00:06:24.689 "iaa": { 00:06:24.689 "mask": "0x1000", 00:06:24.689 "tpoint_mask": "0x0" 00:06:24.689 }, 00:06:24.689 "nvme_tcp": { 00:06:24.689 "mask": "0x2000", 00:06:24.689 "tpoint_mask": "0x0" 00:06:24.689 }, 00:06:24.689 "bdev_nvme": { 00:06:24.689 "mask": "0x4000", 00:06:24.689 "tpoint_mask": "0x0" 00:06:24.689 }, 00:06:24.689 "sock": { 00:06:24.689 "mask": "0x8000", 00:06:24.689 "tpoint_mask": "0x0" 00:06:24.689 }, 00:06:24.689 "blob": { 00:06:24.689 "mask": "0x10000", 00:06:24.689 "tpoint_mask": "0x0" 00:06:24.689 }, 00:06:24.689 "bdev_raid": { 00:06:24.689 "mask": "0x20000", 00:06:24.689 "tpoint_mask": "0x0" 00:06:24.689 } 00:06:24.689 }' 00:06:24.689 20:19:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:24.689 20:19:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:06:24.689 20:19:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:24.689 20:19:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:24.689 20:19:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:24.948 20:19:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:24.948 20:19:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:24.948 20:19:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:24.949 20:19:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:24.949 ************************************ 00:06:24.949 END TEST rpc_trace_cmd_test 00:06:24.949 ************************************ 00:06:24.949 20:19:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:24.949 00:06:24.949 real 0m0.255s 00:06:24.949 user 0m0.204s 00:06:24.949 sys 0m0.039s 00:06:24.949 20:19:18 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.949 20:19:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.949 20:19:18 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:24.949 20:19:18 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:24.949 20:19:18 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:24.949 20:19:18 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:24.949 20:19:18 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.949 20:19:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.949 ************************************ 00:06:24.949 START TEST rpc_daemon_integrity 00:06:24.949 ************************************ 00:06:24.949 20:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:24.949 20:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:24.949 20:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.949 20:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.949 20:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.949 20:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:24.949 20:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:24.949 20:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:24.949 20:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:24.949 20:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.949 20:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.949 20:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.949 20:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:24.949 20:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # 
rpc_cmd bdev_get_bdevs 00:06:24.949 20:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.949 20:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:25.214 20:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.214 20:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:25.214 { 00:06:25.214 "name": "Malloc2", 00:06:25.214 "aliases": [ 00:06:25.214 "5cd1947f-9643-48b5-bb7d-a2d16f23082b" 00:06:25.214 ], 00:06:25.214 "product_name": "Malloc disk", 00:06:25.214 "block_size": 512, 00:06:25.214 "num_blocks": 16384, 00:06:25.214 "uuid": "5cd1947f-9643-48b5-bb7d-a2d16f23082b", 00:06:25.214 "assigned_rate_limits": { 00:06:25.214 "rw_ios_per_sec": 0, 00:06:25.214 "rw_mbytes_per_sec": 0, 00:06:25.214 "r_mbytes_per_sec": 0, 00:06:25.214 "w_mbytes_per_sec": 0 00:06:25.214 }, 00:06:25.214 "claimed": false, 00:06:25.214 "zoned": false, 00:06:25.214 "supported_io_types": { 00:06:25.214 "read": true, 00:06:25.214 "write": true, 00:06:25.214 "unmap": true, 00:06:25.214 "flush": true, 00:06:25.214 "reset": true, 00:06:25.214 "nvme_admin": false, 00:06:25.214 "nvme_io": false, 00:06:25.214 "nvme_io_md": false, 00:06:25.214 "write_zeroes": true, 00:06:25.214 "zcopy": true, 00:06:25.214 "get_zone_info": false, 00:06:25.214 "zone_management": false, 00:06:25.214 "zone_append": false, 00:06:25.214 "compare": false, 00:06:25.214 "compare_and_write": false, 00:06:25.214 "abort": true, 00:06:25.214 "seek_hole": false, 00:06:25.214 "seek_data": false, 00:06:25.214 "copy": true, 00:06:25.214 "nvme_iov_md": false 00:06:25.214 }, 00:06:25.214 "memory_domains": [ 00:06:25.214 { 00:06:25.214 "dma_device_id": "system", 00:06:25.214 "dma_device_type": 1 00:06:25.214 }, 00:06:25.214 { 00:06:25.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:25.214 "dma_device_type": 2 00:06:25.214 } 00:06:25.214 ], 00:06:25.214 "driver_specific": {} 00:06:25.214 } 00:06:25.214 ]' 
00:06:25.214 20:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:25.214 20:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:25.214 20:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:25.214 20:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.214 20:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:25.214 [2024-11-26 20:19:18.569461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:25.214 [2024-11-26 20:19:18.569684] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:25.214 [2024-11-26 20:19:18.569747] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:06:25.214 [2024-11-26 20:19:18.569786] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:25.214 [2024-11-26 20:19:18.572569] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:25.214 [2024-11-26 20:19:18.572679] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:25.214 Passthru0 00:06:25.214 20:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.214 20:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:25.214 20:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.214 20:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:25.214 20:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.214 20:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:25.214 { 00:06:25.214 "name": "Malloc2", 00:06:25.214 "aliases": [ 00:06:25.214 "5cd1947f-9643-48b5-bb7d-a2d16f23082b" 00:06:25.214 ], 00:06:25.214 "product_name": "Malloc disk", 00:06:25.214 "block_size": 
512, 00:06:25.214 "num_blocks": 16384, 00:06:25.214 "uuid": "5cd1947f-9643-48b5-bb7d-a2d16f23082b", 00:06:25.214 "assigned_rate_limits": { 00:06:25.214 "rw_ios_per_sec": 0, 00:06:25.214 "rw_mbytes_per_sec": 0, 00:06:25.214 "r_mbytes_per_sec": 0, 00:06:25.214 "w_mbytes_per_sec": 0 00:06:25.214 }, 00:06:25.214 "claimed": true, 00:06:25.214 "claim_type": "exclusive_write", 00:06:25.214 "zoned": false, 00:06:25.214 "supported_io_types": { 00:06:25.214 "read": true, 00:06:25.214 "write": true, 00:06:25.214 "unmap": true, 00:06:25.214 "flush": true, 00:06:25.214 "reset": true, 00:06:25.214 "nvme_admin": false, 00:06:25.214 "nvme_io": false, 00:06:25.214 "nvme_io_md": false, 00:06:25.214 "write_zeroes": true, 00:06:25.214 "zcopy": true, 00:06:25.214 "get_zone_info": false, 00:06:25.214 "zone_management": false, 00:06:25.214 "zone_append": false, 00:06:25.214 "compare": false, 00:06:25.214 "compare_and_write": false, 00:06:25.214 "abort": true, 00:06:25.214 "seek_hole": false, 00:06:25.214 "seek_data": false, 00:06:25.214 "copy": true, 00:06:25.214 "nvme_iov_md": false 00:06:25.214 }, 00:06:25.214 "memory_domains": [ 00:06:25.214 { 00:06:25.214 "dma_device_id": "system", 00:06:25.214 "dma_device_type": 1 00:06:25.214 }, 00:06:25.214 { 00:06:25.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:25.214 "dma_device_type": 2 00:06:25.214 } 00:06:25.214 ], 00:06:25.214 "driver_specific": {} 00:06:25.214 }, 00:06:25.214 { 00:06:25.214 "name": "Passthru0", 00:06:25.214 "aliases": [ 00:06:25.214 "34108e0e-014b-5168-947c-4f7be7431e49" 00:06:25.214 ], 00:06:25.214 "product_name": "passthru", 00:06:25.214 "block_size": 512, 00:06:25.214 "num_blocks": 16384, 00:06:25.214 "uuid": "34108e0e-014b-5168-947c-4f7be7431e49", 00:06:25.214 "assigned_rate_limits": { 00:06:25.214 "rw_ios_per_sec": 0, 00:06:25.214 "rw_mbytes_per_sec": 0, 00:06:25.214 "r_mbytes_per_sec": 0, 00:06:25.214 "w_mbytes_per_sec": 0 00:06:25.214 }, 00:06:25.214 "claimed": false, 00:06:25.214 "zoned": false, 00:06:25.214 
"supported_io_types": { 00:06:25.214 "read": true, 00:06:25.214 "write": true, 00:06:25.214 "unmap": true, 00:06:25.214 "flush": true, 00:06:25.214 "reset": true, 00:06:25.215 "nvme_admin": false, 00:06:25.215 "nvme_io": false, 00:06:25.215 "nvme_io_md": false, 00:06:25.215 "write_zeroes": true, 00:06:25.215 "zcopy": true, 00:06:25.215 "get_zone_info": false, 00:06:25.215 "zone_management": false, 00:06:25.215 "zone_append": false, 00:06:25.215 "compare": false, 00:06:25.215 "compare_and_write": false, 00:06:25.215 "abort": true, 00:06:25.215 "seek_hole": false, 00:06:25.215 "seek_data": false, 00:06:25.215 "copy": true, 00:06:25.215 "nvme_iov_md": false 00:06:25.215 }, 00:06:25.215 "memory_domains": [ 00:06:25.215 { 00:06:25.215 "dma_device_id": "system", 00:06:25.215 "dma_device_type": 1 00:06:25.215 }, 00:06:25.215 { 00:06:25.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:25.215 "dma_device_type": 2 00:06:25.215 } 00:06:25.215 ], 00:06:25.215 "driver_specific": { 00:06:25.215 "passthru": { 00:06:25.215 "name": "Passthru0", 00:06:25.215 "base_bdev_name": "Malloc2" 00:06:25.215 } 00:06:25.215 } 00:06:25.215 } 00:06:25.215 ]' 00:06:25.215 20:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:25.215 20:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:25.215 20:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:25.215 20:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.215 20:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:25.215 20:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.215 20:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:25.215 20:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.215 20:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # 
set +x 00:06:25.215 20:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.215 20:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:25.215 20:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.215 20:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:25.215 20:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.215 20:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:25.215 20:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:25.215 ************************************ 00:06:25.215 END TEST rpc_daemon_integrity 00:06:25.215 ************************************ 00:06:25.215 20:19:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:25.215 00:06:25.215 real 0m0.319s 00:06:25.215 user 0m0.194s 00:06:25.215 sys 0m0.054s 00:06:25.215 20:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.215 20:19:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:25.474 20:19:18 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:25.474 20:19:18 rpc -- rpc/rpc.sh@84 -- # killprocess 69380 00:06:25.474 20:19:18 rpc -- common/autotest_common.sh@950 -- # '[' -z 69380 ']' 00:06:25.474 20:19:18 rpc -- common/autotest_common.sh@954 -- # kill -0 69380 00:06:25.474 20:19:18 rpc -- common/autotest_common.sh@955 -- # uname 00:06:25.474 20:19:18 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:25.474 20:19:18 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69380 00:06:25.474 killing process with pid 69380 00:06:25.474 20:19:18 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:25.474 20:19:18 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:25.474 20:19:18 rpc -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 69380' 00:06:25.474 20:19:18 rpc -- common/autotest_common.sh@969 -- # kill 69380 00:06:25.474 20:19:18 rpc -- common/autotest_common.sh@974 -- # wait 69380 00:06:26.041 ************************************ 00:06:26.041 END TEST rpc 00:06:26.041 ************************************ 00:06:26.041 00:06:26.041 real 0m3.122s 00:06:26.041 user 0m3.704s 00:06:26.041 sys 0m0.983s 00:06:26.041 20:19:19 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.041 20:19:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.041 20:19:19 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:26.041 20:19:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:26.041 20:19:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.041 20:19:19 -- common/autotest_common.sh@10 -- # set +x 00:06:26.041 ************************************ 00:06:26.041 START TEST skip_rpc 00:06:26.041 ************************************ 00:06:26.041 20:19:19 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:26.041 * Looking for test storage... 
00:06:26.041 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:26.041 20:19:19 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:26.041 20:19:19 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:26.041 20:19:19 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:26.301 20:19:19 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:26.301 20:19:19 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:26.301 20:19:19 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:26.301 20:19:19 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:26.301 20:19:19 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.301 20:19:19 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:26.301 20:19:19 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:26.301 20:19:19 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:26.301 20:19:19 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:26.301 20:19:19 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:26.301 20:19:19 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:26.301 20:19:19 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.301 20:19:19 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:26.301 20:19:19 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:26.301 20:19:19 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.301 20:19:19 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:26.301 20:19:19 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:26.301 20:19:19 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:26.301 20:19:19 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.301 20:19:19 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:26.301 20:19:19 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:26.301 20:19:19 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:26.301 20:19:19 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:26.301 20:19:19 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.301 20:19:19 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:26.301 20:19:19 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:26.301 20:19:19 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:26.301 20:19:19 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:26.301 20:19:19 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:26.301 20:19:19 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.301 20:19:19 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:26.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.301 --rc genhtml_branch_coverage=1 00:06:26.301 --rc genhtml_function_coverage=1 00:06:26.301 --rc genhtml_legend=1 00:06:26.301 --rc geninfo_all_blocks=1 00:06:26.301 --rc geninfo_unexecuted_blocks=1 00:06:26.301 00:06:26.301 ' 00:06:26.301 20:19:19 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:26.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.301 --rc genhtml_branch_coverage=1 00:06:26.301 --rc genhtml_function_coverage=1 00:06:26.301 --rc genhtml_legend=1 00:06:26.301 --rc geninfo_all_blocks=1 00:06:26.301 --rc geninfo_unexecuted_blocks=1 00:06:26.301 00:06:26.301 ' 00:06:26.301 20:19:19 skip_rpc -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:06:26.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.301 --rc genhtml_branch_coverage=1 00:06:26.301 --rc genhtml_function_coverage=1 00:06:26.301 --rc genhtml_legend=1 00:06:26.301 --rc geninfo_all_blocks=1 00:06:26.301 --rc geninfo_unexecuted_blocks=1 00:06:26.301 00:06:26.301 ' 00:06:26.301 20:19:19 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:26.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.301 --rc genhtml_branch_coverage=1 00:06:26.301 --rc genhtml_function_coverage=1 00:06:26.301 --rc genhtml_legend=1 00:06:26.301 --rc geninfo_all_blocks=1 00:06:26.301 --rc geninfo_unexecuted_blocks=1 00:06:26.301 00:06:26.301 ' 00:06:26.301 20:19:19 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:26.301 20:19:19 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:26.301 20:19:19 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:26.301 20:19:19 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:26.301 20:19:19 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.301 20:19:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.301 ************************************ 00:06:26.301 START TEST skip_rpc 00:06:26.301 ************************************ 00:06:26.301 20:19:19 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:06:26.301 20:19:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=69587 00:06:26.301 20:19:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:26.301 20:19:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:26.301 20:19:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:26.301 [2024-11-26 20:19:19.747234] Starting SPDK v24.09.1-pre 
git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:26.301 [2024-11-26 20:19:19.747893] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69587 ] 00:06:26.561 [2024-11-26 20:19:19.911149] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.561 [2024-11-26 20:19:19.968736] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.851 20:19:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:31.851 20:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:31.851 20:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:31.851 20:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:31.851 20:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:31.851 20:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:31.851 20:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:31.851 20:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:31.851 20:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.851 20:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.851 20:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:31.851 20:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:31.851 20:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:31.851 20:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:31.852 20:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:06:31.852 20:19:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:31.852 20:19:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69587 00:06:31.852 20:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 69587 ']' 00:06:31.852 20:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 69587 00:06:31.852 20:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:06:31.852 20:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:31.852 20:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69587 00:06:31.852 20:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:31.852 20:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:31.852 20:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69587' 00:06:31.852 killing process with pid 69587 00:06:31.852 20:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 69587 00:06:31.852 20:19:24 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 69587 00:06:31.852 00:06:31.852 real 0m5.558s 00:06:31.852 user 0m5.076s 00:06:31.852 sys 0m0.405s 00:06:31.852 20:19:25 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:31.852 20:19:25 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.852 ************************************ 00:06:31.852 END TEST skip_rpc 00:06:31.852 ************************************ 00:06:31.852 20:19:25 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:31.852 20:19:25 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:31.852 20:19:25 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:31.852 20:19:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.852 
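The skip_rpc test above starts spdk_tgt with --no-rpc-server and confirms that rpc_cmd spdk_get_version fails. When the RPC server is running, SPDK's rpc.py speaks JSON-RPC 2.0 over a Unix domain socket (/var/tmp/spdk.sock). A minimal self-contained sketch of that request/response framing follows; the in-process fake_target function and its canned version string are assumptions standing in for a live spdk_tgt, not SPDK's actual server code:

```python
import json

def build_request(method, params=None, req_id=1):
    """Construct a JSON-RPC 2.0 request of the shape rpc.py sends."""
    req = {"jsonrpc": "2.0", "method": method, "id": req_id}
    if params is not None:
        req["params"] = params
    return json.dumps(req)

def fake_target(raw):
    """Stand-in for a live spdk_tgt: answers spdk_get_version only.
    The version string here is a hypothetical placeholder."""
    req = json.loads(raw)
    if req["method"] == "spdk_get_version":
        return json.dumps({"jsonrpc": "2.0", "id": req["id"],
                           "result": {"version": "SPDK v24.09.1-pre"}})
    # JSON-RPC 2.0 reserves -32601 for "method not found"
    return json.dumps({"jsonrpc": "2.0", "id": req["id"],
                       "error": {"code": -32601, "message": "Method not found"}})

resp = json.loads(fake_target(build_request("spdk_get_version")))
print(resp["result"]["version"])
```

With --no-rpc-server there is simply nothing listening on the socket, which is why every rpc_cmd in the test is expected to fail until the trap fires.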
************************************ 00:06:31.852 START TEST skip_rpc_with_json 00:06:31.852 ************************************ 00:06:31.852 20:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:06:31.852 20:19:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:31.852 20:19:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=69674 00:06:31.852 20:19:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:31.852 20:19:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:31.852 20:19:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 69674 00:06:31.852 20:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 69674 ']' 00:06:31.852 20:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.852 20:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:31.852 20:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.852 20:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:31.852 20:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:31.852 [2024-11-26 20:19:25.364316] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:31.852 [2024-11-26 20:19:25.364555] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69674 ] 00:06:32.110 [2024-11-26 20:19:25.524646] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.110 [2024-11-26 20:19:25.579233] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.047 20:19:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:33.047 20:19:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:06:33.047 20:19:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:33.047 20:19:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.047 20:19:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:33.047 [2024-11-26 20:19:26.233762] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:33.047 request: 00:06:33.047 { 00:06:33.047 "trtype": "tcp", 00:06:33.047 "method": "nvmf_get_transports", 00:06:33.047 "req_id": 1 00:06:33.047 } 00:06:33.047 Got JSON-RPC error response 00:06:33.047 response: 00:06:33.047 { 00:06:33.047 "code": -19, 00:06:33.047 "message": "No such device" 00:06:33.047 } 00:06:33.047 20:19:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:33.047 20:19:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:33.047 20:19:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.047 20:19:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:33.047 [2024-11-26 20:19:26.245867] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:06:33.047 20:19:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.047 20:19:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:33.047 20:19:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.047 20:19:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:33.047 20:19:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.047 20:19:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:33.047 { 00:06:33.047 "subsystems": [ 00:06:33.047 { 00:06:33.047 "subsystem": "fsdev", 00:06:33.047 "config": [ 00:06:33.047 { 00:06:33.047 "method": "fsdev_set_opts", 00:06:33.047 "params": { 00:06:33.047 "fsdev_io_pool_size": 65535, 00:06:33.047 "fsdev_io_cache_size": 256 00:06:33.047 } 00:06:33.047 } 00:06:33.047 ] 00:06:33.047 }, 00:06:33.047 { 00:06:33.047 "subsystem": "keyring", 00:06:33.047 "config": [] 00:06:33.047 }, 00:06:33.047 { 00:06:33.047 "subsystem": "iobuf", 00:06:33.047 "config": [ 00:06:33.047 { 00:06:33.047 "method": "iobuf_set_options", 00:06:33.047 "params": { 00:06:33.047 "small_pool_count": 8192, 00:06:33.047 "large_pool_count": 1024, 00:06:33.047 "small_bufsize": 8192, 00:06:33.047 "large_bufsize": 135168 00:06:33.047 } 00:06:33.047 } 00:06:33.047 ] 00:06:33.047 }, 00:06:33.047 { 00:06:33.047 "subsystem": "sock", 00:06:33.047 "config": [ 00:06:33.047 { 00:06:33.047 "method": "sock_set_default_impl", 00:06:33.047 "params": { 00:06:33.047 "impl_name": "posix" 00:06:33.047 } 00:06:33.047 }, 00:06:33.047 { 00:06:33.047 "method": "sock_impl_set_options", 00:06:33.047 "params": { 00:06:33.047 "impl_name": "ssl", 00:06:33.047 "recv_buf_size": 4096, 00:06:33.047 "send_buf_size": 4096, 00:06:33.047 "enable_recv_pipe": true, 00:06:33.047 "enable_quickack": false, 00:06:33.047 "enable_placement_id": 0, 00:06:33.047 
"enable_zerocopy_send_server": true, 00:06:33.047 "enable_zerocopy_send_client": false, 00:06:33.048 "zerocopy_threshold": 0, 00:06:33.048 "tls_version": 0, 00:06:33.048 "enable_ktls": false 00:06:33.048 } 00:06:33.048 }, 00:06:33.048 { 00:06:33.048 "method": "sock_impl_set_options", 00:06:33.048 "params": { 00:06:33.048 "impl_name": "posix", 00:06:33.048 "recv_buf_size": 2097152, 00:06:33.048 "send_buf_size": 2097152, 00:06:33.048 "enable_recv_pipe": true, 00:06:33.048 "enable_quickack": false, 00:06:33.048 "enable_placement_id": 0, 00:06:33.048 "enable_zerocopy_send_server": true, 00:06:33.048 "enable_zerocopy_send_client": false, 00:06:33.048 "zerocopy_threshold": 0, 00:06:33.048 "tls_version": 0, 00:06:33.048 "enable_ktls": false 00:06:33.048 } 00:06:33.048 } 00:06:33.048 ] 00:06:33.048 }, 00:06:33.048 { 00:06:33.048 "subsystem": "vmd", 00:06:33.048 "config": [] 00:06:33.048 }, 00:06:33.048 { 00:06:33.048 "subsystem": "accel", 00:06:33.048 "config": [ 00:06:33.048 { 00:06:33.048 "method": "accel_set_options", 00:06:33.048 "params": { 00:06:33.048 "small_cache_size": 128, 00:06:33.048 "large_cache_size": 16, 00:06:33.048 "task_count": 2048, 00:06:33.048 "sequence_count": 2048, 00:06:33.048 "buf_count": 2048 00:06:33.048 } 00:06:33.048 } 00:06:33.048 ] 00:06:33.048 }, 00:06:33.048 { 00:06:33.048 "subsystem": "bdev", 00:06:33.048 "config": [ 00:06:33.048 { 00:06:33.048 "method": "bdev_set_options", 00:06:33.048 "params": { 00:06:33.048 "bdev_io_pool_size": 65535, 00:06:33.048 "bdev_io_cache_size": 256, 00:06:33.048 "bdev_auto_examine": true, 00:06:33.048 "iobuf_small_cache_size": 128, 00:06:33.048 "iobuf_large_cache_size": 16 00:06:33.048 } 00:06:33.048 }, 00:06:33.048 { 00:06:33.048 "method": "bdev_raid_set_options", 00:06:33.048 "params": { 00:06:33.048 "process_window_size_kb": 1024, 00:06:33.048 "process_max_bandwidth_mb_sec": 0 00:06:33.048 } 00:06:33.048 }, 00:06:33.048 { 00:06:33.048 "method": "bdev_iscsi_set_options", 00:06:33.048 "params": { 00:06:33.048 
"timeout_sec": 30 00:06:33.048 } 00:06:33.048 }, 00:06:33.048 { 00:06:33.048 "method": "bdev_nvme_set_options", 00:06:33.048 "params": { 00:06:33.048 "action_on_timeout": "none", 00:06:33.048 "timeout_us": 0, 00:06:33.048 "timeout_admin_us": 0, 00:06:33.048 "keep_alive_timeout_ms": 10000, 00:06:33.048 "arbitration_burst": 0, 00:06:33.048 "low_priority_weight": 0, 00:06:33.048 "medium_priority_weight": 0, 00:06:33.048 "high_priority_weight": 0, 00:06:33.048 "nvme_adminq_poll_period_us": 10000, 00:06:33.048 "nvme_ioq_poll_period_us": 0, 00:06:33.048 "io_queue_requests": 0, 00:06:33.048 "delay_cmd_submit": true, 00:06:33.048 "transport_retry_count": 4, 00:06:33.048 "bdev_retry_count": 3, 00:06:33.048 "transport_ack_timeout": 0, 00:06:33.048 "ctrlr_loss_timeout_sec": 0, 00:06:33.048 "reconnect_delay_sec": 0, 00:06:33.048 "fast_io_fail_timeout_sec": 0, 00:06:33.048 "disable_auto_failback": false, 00:06:33.048 "generate_uuids": false, 00:06:33.048 "transport_tos": 0, 00:06:33.048 "nvme_error_stat": false, 00:06:33.048 "rdma_srq_size": 0, 00:06:33.048 "io_path_stat": false, 00:06:33.048 "allow_accel_sequence": false, 00:06:33.048 "rdma_max_cq_size": 0, 00:06:33.048 "rdma_cm_event_timeout_ms": 0, 00:06:33.048 "dhchap_digests": [ 00:06:33.048 "sha256", 00:06:33.048 "sha384", 00:06:33.048 "sha512" 00:06:33.048 ], 00:06:33.048 "dhchap_dhgroups": [ 00:06:33.048 "null", 00:06:33.048 "ffdhe2048", 00:06:33.048 "ffdhe3072", 00:06:33.048 "ffdhe4096", 00:06:33.048 "ffdhe6144", 00:06:33.048 "ffdhe8192" 00:06:33.048 ] 00:06:33.048 } 00:06:33.048 }, 00:06:33.048 { 00:06:33.048 "method": "bdev_nvme_set_hotplug", 00:06:33.048 "params": { 00:06:33.048 "period_us": 100000, 00:06:33.048 "enable": false 00:06:33.048 } 00:06:33.048 }, 00:06:33.048 { 00:06:33.048 "method": "bdev_wait_for_examine" 00:06:33.048 } 00:06:33.048 ] 00:06:33.048 }, 00:06:33.048 { 00:06:33.048 "subsystem": "scsi", 00:06:33.048 "config": null 00:06:33.048 }, 00:06:33.048 { 00:06:33.048 "subsystem": "scheduler", 
00:06:33.048 "config": [ 00:06:33.048 { 00:06:33.048 "method": "framework_set_scheduler", 00:06:33.048 "params": { 00:06:33.048 "name": "static" 00:06:33.048 } 00:06:33.048 } 00:06:33.048 ] 00:06:33.048 }, 00:06:33.048 { 00:06:33.048 "subsystem": "vhost_scsi", 00:06:33.048 "config": [] 00:06:33.048 }, 00:06:33.048 { 00:06:33.048 "subsystem": "vhost_blk", 00:06:33.048 "config": [] 00:06:33.048 }, 00:06:33.048 { 00:06:33.048 "subsystem": "ublk", 00:06:33.048 "config": [] 00:06:33.048 }, 00:06:33.048 { 00:06:33.048 "subsystem": "nbd", 00:06:33.048 "config": [] 00:06:33.048 }, 00:06:33.048 { 00:06:33.048 "subsystem": "nvmf", 00:06:33.048 "config": [ 00:06:33.048 { 00:06:33.048 "method": "nvmf_set_config", 00:06:33.048 "params": { 00:06:33.048 "discovery_filter": "match_any", 00:06:33.048 "admin_cmd_passthru": { 00:06:33.048 "identify_ctrlr": false 00:06:33.048 }, 00:06:33.048 "dhchap_digests": [ 00:06:33.048 "sha256", 00:06:33.048 "sha384", 00:06:33.048 "sha512" 00:06:33.048 ], 00:06:33.048 "dhchap_dhgroups": [ 00:06:33.048 "null", 00:06:33.048 "ffdhe2048", 00:06:33.048 "ffdhe3072", 00:06:33.048 "ffdhe4096", 00:06:33.048 "ffdhe6144", 00:06:33.048 "ffdhe8192" 00:06:33.048 ] 00:06:33.048 } 00:06:33.048 }, 00:06:33.048 { 00:06:33.048 "method": "nvmf_set_max_subsystems", 00:06:33.048 "params": { 00:06:33.048 "max_subsystems": 1024 00:06:33.048 } 00:06:33.048 }, 00:06:33.048 { 00:06:33.048 "method": "nvmf_set_crdt", 00:06:33.048 "params": { 00:06:33.048 "crdt1": 0, 00:06:33.048 "crdt2": 0, 00:06:33.048 "crdt3": 0 00:06:33.048 } 00:06:33.048 }, 00:06:33.048 { 00:06:33.048 "method": "nvmf_create_transport", 00:06:33.048 "params": { 00:06:33.048 "trtype": "TCP", 00:06:33.048 "max_queue_depth": 128, 00:06:33.048 "max_io_qpairs_per_ctrlr": 127, 00:06:33.048 "in_capsule_data_size": 4096, 00:06:33.048 "max_io_size": 131072, 00:06:33.048 "io_unit_size": 131072, 00:06:33.048 "max_aq_depth": 128, 00:06:33.048 "num_shared_buffers": 511, 00:06:33.048 "buf_cache_size": 4294967295, 
00:06:33.048 "dif_insert_or_strip": false, 00:06:33.048 "zcopy": false, 00:06:33.048 "c2h_success": true, 00:06:33.048 "sock_priority": 0, 00:06:33.048 "abort_timeout_sec": 1, 00:06:33.048 "ack_timeout": 0, 00:06:33.048 "data_wr_pool_size": 0 00:06:33.048 } 00:06:33.048 } 00:06:33.048 ] 00:06:33.048 }, 00:06:33.048 { 00:06:33.048 "subsystem": "iscsi", 00:06:33.048 "config": [ 00:06:33.048 { 00:06:33.048 "method": "iscsi_set_options", 00:06:33.048 "params": { 00:06:33.048 "node_base": "iqn.2016-06.io.spdk", 00:06:33.048 "max_sessions": 128, 00:06:33.048 "max_connections_per_session": 2, 00:06:33.048 "max_queue_depth": 64, 00:06:33.048 "default_time2wait": 2, 00:06:33.048 "default_time2retain": 20, 00:06:33.048 "first_burst_length": 8192, 00:06:33.048 "immediate_data": true, 00:06:33.048 "allow_duplicated_isid": false, 00:06:33.048 "error_recovery_level": 0, 00:06:33.048 "nop_timeout": 60, 00:06:33.048 "nop_in_interval": 30, 00:06:33.048 "disable_chap": false, 00:06:33.048 "require_chap": false, 00:06:33.048 "mutual_chap": false, 00:06:33.048 "chap_group": 0, 00:06:33.048 "max_large_datain_per_connection": 64, 00:06:33.048 "max_r2t_per_connection": 4, 00:06:33.048 "pdu_pool_size": 36864, 00:06:33.048 "immediate_data_pool_size": 16384, 00:06:33.048 "data_out_pool_size": 2048 00:06:33.048 } 00:06:33.048 } 00:06:33.048 ] 00:06:33.048 } 00:06:33.048 ] 00:06:33.048 } 00:06:33.048 20:19:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:33.048 20:19:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 69674 00:06:33.048 20:19:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69674 ']' 00:06:33.048 20:19:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69674 00:06:33.048 20:19:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:33.048 20:19:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:06:33.048 20:19:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69674 00:06:33.048 killing process with pid 69674 00:06:33.048 20:19:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:33.048 20:19:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:33.048 20:19:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69674' 00:06:33.048 20:19:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69674 00:06:33.048 20:19:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69674 00:06:33.616 20:19:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=69703 00:06:33.616 20:19:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:33.616 20:19:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:38.891 20:19:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 69703 00:06:38.891 20:19:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69703 ']' 00:06:38.891 20:19:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69703 00:06:38.891 20:19:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:38.891 20:19:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:38.891 20:19:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69703 00:06:38.891 killing process with pid 69703 00:06:38.891 20:19:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:38.891 20:19:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 
00:06:38.891 20:19:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69703' 00:06:38.891 20:19:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69703 00:06:38.891 20:19:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69703 00:06:39.151 20:19:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:39.151 20:19:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:39.151 ************************************ 00:06:39.151 END TEST skip_rpc_with_json 00:06:39.151 ************************************ 00:06:39.151 00:06:39.151 real 0m7.370s 00:06:39.151 user 0m6.801s 00:06:39.151 sys 0m0.877s 00:06:39.151 20:19:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.151 20:19:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:39.151 20:19:32 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:39.151 20:19:32 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:39.151 20:19:32 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:39.151 20:19:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.151 ************************************ 00:06:39.151 START TEST skip_rpc_with_delay 00:06:39.151 ************************************ 00:06:39.411 20:19:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:39.411 20:19:32 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:39.411 20:19:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:39.411 20:19:32 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:39.411 20:19:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:39.411 20:19:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:39.411 20:19:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:39.411 20:19:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:39.411 20:19:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:39.411 20:19:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:39.411 20:19:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:39.411 20:19:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:39.411 20:19:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:39.411 [2024-11-26 20:19:32.794286] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:39.411 [2024-11-26 20:19:32.795018] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:39.411 20:19:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:39.411 20:19:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:39.411 20:19:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:39.411 20:19:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:39.411 00:06:39.411 real 0m0.157s 00:06:39.411 user 0m0.083s 00:06:39.411 sys 0m0.072s 00:06:39.411 20:19:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.411 ************************************ 00:06:39.411 END TEST skip_rpc_with_delay 00:06:39.411 ************************************ 00:06:39.411 20:19:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:39.411 20:19:32 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:39.411 20:19:32 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:39.411 20:19:32 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:39.411 20:19:32 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:39.411 20:19:32 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:39.411 20:19:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.411 ************************************ 00:06:39.411 START TEST exit_on_failed_rpc_init 00:06:39.411 ************************************ 00:06:39.411 20:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:39.411 20:19:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=69820 00:06:39.411 20:19:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 
00:06:39.411 20:19:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 69820 00:06:39.411 20:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 69820 ']' 00:06:39.411 20:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.411 20:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:39.411 20:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.411 20:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:39.411 20:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:39.670 [2024-11-26 20:19:33.019676] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:39.670 [2024-11-26 20:19:33.019900] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69820 ] 00:06:39.670 [2024-11-26 20:19:33.183837] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.929 [2024-11-26 20:19:33.265186] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.499 20:19:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:40.499 20:19:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:40.499 20:19:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:40.499 20:19:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:40.499 20:19:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:40.499 20:19:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:40.499 20:19:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:40.499 20:19:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:40.499 20:19:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:40.499 20:19:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:40.499 20:19:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:40.499 20:19:33 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:40.499 20:19:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:40.499 20:19:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:40.499 20:19:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:40.499 [2024-11-26 20:19:33.961082] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:40.499 [2024-11-26 20:19:33.961312] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69838 ] 00:06:40.759 [2024-11-26 20:19:34.113089] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.759 [2024-11-26 20:19:34.193441] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.759 [2024-11-26 20:19:34.193634] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:40.759 [2024-11-26 20:19:34.193688] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:40.759 [2024-11-26 20:19:34.193717] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:41.020 20:19:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:41.020 20:19:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:41.020 20:19:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:41.020 20:19:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:41.020 20:19:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:41.020 20:19:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:41.020 20:19:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:41.020 20:19:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 69820 00:06:41.020 20:19:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 69820 ']' 00:06:41.020 20:19:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 69820 00:06:41.020 20:19:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:41.020 20:19:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:41.020 20:19:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69820 00:06:41.020 20:19:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:41.020 20:19:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:41.020 20:19:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69820' 
00:06:41.020 killing process with pid 69820 00:06:41.020 20:19:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 69820 00:06:41.020 20:19:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 69820 00:06:41.620 00:06:41.620 real 0m2.035s 00:06:41.620 user 0m2.164s 00:06:41.620 sys 0m0.605s 00:06:41.620 20:19:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:41.620 20:19:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:41.620 ************************************ 00:06:41.620 END TEST exit_on_failed_rpc_init 00:06:41.620 ************************************ 00:06:41.620 20:19:35 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:41.620 ************************************ 00:06:41.620 END TEST skip_rpc 00:06:41.620 ************************************ 00:06:41.620 00:06:41.620 real 0m15.604s 00:06:41.620 user 0m14.327s 00:06:41.620 sys 0m2.253s 00:06:41.620 20:19:35 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:41.620 20:19:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.620 20:19:35 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:41.620 20:19:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:41.620 20:19:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.620 20:19:35 -- common/autotest_common.sh@10 -- # set +x 00:06:41.620 ************************************ 00:06:41.620 START TEST rpc_client 00:06:41.620 ************************************ 00:06:41.620 20:19:35 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:41.879 * Looking for test storage... 
00:06:41.879 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:41.879 20:19:35 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:41.879 20:19:35 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:06:41.879 20:19:35 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:41.880 20:19:35 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:41.880 20:19:35 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:41.880 20:19:35 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:41.880 20:19:35 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:41.880 20:19:35 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:41.880 20:19:35 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:41.880 20:19:35 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:41.880 20:19:35 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:41.880 20:19:35 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:41.880 20:19:35 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:41.880 20:19:35 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:41.880 20:19:35 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:41.880 20:19:35 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:41.880 20:19:35 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:41.880 20:19:35 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:41.880 20:19:35 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:41.880 20:19:35 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:41.880 20:19:35 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:41.880 20:19:35 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:41.880 20:19:35 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:41.880 20:19:35 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:41.880 20:19:35 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:41.880 20:19:35 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:41.880 20:19:35 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:41.880 20:19:35 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:41.880 20:19:35 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:41.880 20:19:35 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:41.880 20:19:35 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:41.880 20:19:35 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:41.880 20:19:35 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:41.880 20:19:35 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:41.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.880 --rc genhtml_branch_coverage=1 00:06:41.880 --rc genhtml_function_coverage=1 00:06:41.880 --rc genhtml_legend=1 00:06:41.880 --rc geninfo_all_blocks=1 00:06:41.880 --rc geninfo_unexecuted_blocks=1 00:06:41.880 00:06:41.880 ' 00:06:41.880 20:19:35 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:41.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.880 --rc genhtml_branch_coverage=1 00:06:41.880 --rc genhtml_function_coverage=1 00:06:41.880 --rc genhtml_legend=1 00:06:41.880 --rc geninfo_all_blocks=1 00:06:41.880 --rc geninfo_unexecuted_blocks=1 00:06:41.880 00:06:41.880 ' 00:06:41.880 20:19:35 rpc_client -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:41.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.880 --rc genhtml_branch_coverage=1 00:06:41.880 --rc genhtml_function_coverage=1 00:06:41.880 --rc genhtml_legend=1 00:06:41.880 --rc geninfo_all_blocks=1 00:06:41.880 --rc geninfo_unexecuted_blocks=1 00:06:41.880 00:06:41.880 ' 00:06:41.880 20:19:35 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:41.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.880 --rc genhtml_branch_coverage=1 00:06:41.880 --rc genhtml_function_coverage=1 00:06:41.880 --rc genhtml_legend=1 00:06:41.880 --rc geninfo_all_blocks=1 00:06:41.880 --rc geninfo_unexecuted_blocks=1 00:06:41.880 00:06:41.880 ' 00:06:41.880 20:19:35 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:41.880 OK 00:06:41.880 20:19:35 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:41.880 ************************************ 00:06:41.880 END TEST rpc_client 00:06:41.880 ************************************ 00:06:41.880 00:06:41.880 real 0m0.269s 00:06:41.880 user 0m0.153s 00:06:41.880 sys 0m0.128s 00:06:41.880 20:19:35 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:41.880 20:19:35 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:41.880 20:19:35 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:41.880 20:19:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:41.880 20:19:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.880 20:19:35 -- common/autotest_common.sh@10 -- # set +x 00:06:41.880 ************************************ 00:06:41.880 START TEST json_config 00:06:41.880 ************************************ 00:06:41.880 20:19:35 json_config -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:42.139 20:19:35 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:42.139 20:19:35 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:06:42.139 20:19:35 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:42.139 20:19:35 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:42.139 20:19:35 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:42.139 20:19:35 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:42.139 20:19:35 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:42.139 20:19:35 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:42.139 20:19:35 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:42.139 20:19:35 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:42.139 20:19:35 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:42.139 20:19:35 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:42.139 20:19:35 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:42.139 20:19:35 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:42.139 20:19:35 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:42.139 20:19:35 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:42.139 20:19:35 json_config -- scripts/common.sh@345 -- # : 1 00:06:42.139 20:19:35 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:42.139 20:19:35 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:42.139 20:19:35 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:42.139 20:19:35 json_config -- scripts/common.sh@353 -- # local d=1 00:06:42.139 20:19:35 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:42.139 20:19:35 json_config -- scripts/common.sh@355 -- # echo 1 00:06:42.139 20:19:35 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:42.139 20:19:35 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:42.139 20:19:35 json_config -- scripts/common.sh@353 -- # local d=2 00:06:42.139 20:19:35 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:42.139 20:19:35 json_config -- scripts/common.sh@355 -- # echo 2 00:06:42.139 20:19:35 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:42.139 20:19:35 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:42.139 20:19:35 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:42.139 20:19:35 json_config -- scripts/common.sh@368 -- # return 0 00:06:42.139 20:19:35 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:42.139 20:19:35 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:42.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.139 --rc genhtml_branch_coverage=1 00:06:42.139 --rc genhtml_function_coverage=1 00:06:42.139 --rc genhtml_legend=1 00:06:42.139 --rc geninfo_all_blocks=1 00:06:42.139 --rc geninfo_unexecuted_blocks=1 00:06:42.139 00:06:42.139 ' 00:06:42.139 20:19:35 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:42.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.139 --rc genhtml_branch_coverage=1 00:06:42.139 --rc genhtml_function_coverage=1 00:06:42.139 --rc genhtml_legend=1 00:06:42.139 --rc geninfo_all_blocks=1 00:06:42.139 --rc geninfo_unexecuted_blocks=1 00:06:42.139 00:06:42.139 ' 00:06:42.139 20:19:35 json_config -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:42.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.139 --rc genhtml_branch_coverage=1 00:06:42.139 --rc genhtml_function_coverage=1 00:06:42.139 --rc genhtml_legend=1 00:06:42.139 --rc geninfo_all_blocks=1 00:06:42.139 --rc geninfo_unexecuted_blocks=1 00:06:42.139 00:06:42.139 ' 00:06:42.139 20:19:35 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:42.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.139 --rc genhtml_branch_coverage=1 00:06:42.139 --rc genhtml_function_coverage=1 00:06:42.139 --rc genhtml_legend=1 00:06:42.139 --rc geninfo_all_blocks=1 00:06:42.139 --rc geninfo_unexecuted_blocks=1 00:06:42.139 00:06:42.139 ' 00:06:42.139 20:19:35 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:42.139 20:19:35 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:42.139 20:19:35 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:42.139 20:19:35 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:42.139 20:19:35 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:42.139 20:19:35 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:42.139 20:19:35 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:42.140 20:19:35 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:42.140 20:19:35 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:42.140 20:19:35 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:42.140 20:19:35 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:42.140 20:19:35 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:42.140 20:19:35 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d1ebabbf-9595-44ff-861d-4578eb160443 00:06:42.140 20:19:35 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=d1ebabbf-9595-44ff-861d-4578eb160443 00:06:42.140 20:19:35 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:42.140 20:19:35 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:42.140 20:19:35 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:42.140 20:19:35 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:42.140 20:19:35 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:42.140 20:19:35 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:42.140 20:19:35 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:42.140 20:19:35 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:42.140 20:19:35 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:42.140 20:19:35 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.140 20:19:35 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.140 20:19:35 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.140 20:19:35 json_config -- paths/export.sh@5 -- # export PATH 00:06:42.140 20:19:35 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.140 20:19:35 json_config -- nvmf/common.sh@51 -- # : 0 00:06:42.140 20:19:35 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:42.140 20:19:35 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:42.140 20:19:35 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:42.140 20:19:35 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:42.140 20:19:35 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:42.140 20:19:35 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:42.140 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:42.140 20:19:35 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:42.140 20:19:35 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:42.140 20:19:35 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:42.140 20:19:35 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:06:42.140 20:19:35 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:42.140 20:19:35 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:42.140 20:19:35 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:42.140 20:19:35 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:42.140 20:19:35 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:42.140 WARNING: No tests are enabled so not running JSON configuration tests 00:06:42.140 20:19:35 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:42.140 00:06:42.140 real 0m0.221s 00:06:42.140 user 0m0.132s 00:06:42.140 sys 0m0.092s 00:06:42.140 20:19:35 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:42.140 20:19:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:42.140 ************************************ 00:06:42.140 END TEST json_config 00:06:42.140 ************************************ 00:06:42.140 20:19:35 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:42.140 20:19:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:42.140 20:19:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.140 20:19:35 -- common/autotest_common.sh@10 -- # set +x 00:06:42.399 ************************************ 00:06:42.399 START TEST json_config_extra_key 00:06:42.399 ************************************ 00:06:42.399 20:19:35 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:42.399 20:19:35 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:42.399 20:19:35 json_config_extra_key -- 
common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:42.399 20:19:35 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:06:42.399 20:19:35 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:42.399 20:19:35 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:42.399 20:19:35 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:42.399 20:19:35 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:42.399 20:19:35 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:42.399 20:19:35 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:42.399 20:19:35 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:42.399 20:19:35 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:42.399 20:19:35 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:42.399 20:19:35 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:42.399 20:19:35 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:42.399 20:19:35 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:42.399 20:19:35 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:42.399 20:19:35 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:42.399 20:19:35 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:42.399 20:19:35 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:42.399 20:19:35 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:42.399 20:19:35 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:42.399 20:19:35 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:42.399 20:19:35 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:42.399 20:19:35 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:42.399 20:19:35 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:42.399 20:19:35 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:42.399 20:19:35 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:42.399 20:19:35 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:42.399 20:19:35 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:42.399 20:19:35 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:42.399 20:19:35 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:42.399 20:19:35 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:42.399 20:19:35 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:42.399 20:19:35 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:42.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.399 --rc genhtml_branch_coverage=1 00:06:42.399 --rc genhtml_function_coverage=1 00:06:42.399 --rc genhtml_legend=1 00:06:42.399 --rc geninfo_all_blocks=1 00:06:42.399 --rc geninfo_unexecuted_blocks=1 00:06:42.399 00:06:42.399 ' 00:06:42.399 20:19:35 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:42.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.399 --rc genhtml_branch_coverage=1 00:06:42.399 --rc genhtml_function_coverage=1 00:06:42.399 --rc 
genhtml_legend=1 00:06:42.399 --rc geninfo_all_blocks=1 00:06:42.399 --rc geninfo_unexecuted_blocks=1 00:06:42.399 00:06:42.399 ' 00:06:42.399 20:19:35 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:42.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.399 --rc genhtml_branch_coverage=1 00:06:42.399 --rc genhtml_function_coverage=1 00:06:42.399 --rc genhtml_legend=1 00:06:42.399 --rc geninfo_all_blocks=1 00:06:42.399 --rc geninfo_unexecuted_blocks=1 00:06:42.399 00:06:42.399 ' 00:06:42.399 20:19:35 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:42.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.399 --rc genhtml_branch_coverage=1 00:06:42.399 --rc genhtml_function_coverage=1 00:06:42.399 --rc genhtml_legend=1 00:06:42.399 --rc geninfo_all_blocks=1 00:06:42.399 --rc geninfo_unexecuted_blocks=1 00:06:42.399 00:06:42.399 ' 00:06:42.399 20:19:35 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:42.399 20:19:35 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:42.399 20:19:35 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:42.399 20:19:35 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:42.399 20:19:35 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:42.399 20:19:35 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:42.399 20:19:35 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:42.399 20:19:35 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:42.399 20:19:35 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:42.399 20:19:35 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:42.399 20:19:35 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:42.399 20:19:35 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:42.399 20:19:35 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d1ebabbf-9595-44ff-861d-4578eb160443 00:06:42.399 20:19:35 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=d1ebabbf-9595-44ff-861d-4578eb160443 00:06:42.399 20:19:35 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:42.399 20:19:35 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:42.399 20:19:35 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:42.399 20:19:35 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:42.399 20:19:35 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:42.399 20:19:35 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:42.399 20:19:35 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:42.399 20:19:35 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:42.399 20:19:35 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:42.399 20:19:35 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.399 20:19:35 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.399 20:19:35 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.399 20:19:35 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:42.399 20:19:35 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.400 20:19:35 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:42.400 20:19:35 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:42.400 20:19:35 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:42.400 20:19:35 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:42.400 20:19:35 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:42.400 20:19:35 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:06:42.400 20:19:35 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:42.400 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:42.400 20:19:35 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:42.400 20:19:35 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:42.400 20:19:35 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:42.400 20:19:35 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:42.400 20:19:35 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:42.400 20:19:35 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:42.400 20:19:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:42.400 20:19:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:42.400 20:19:35 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:42.400 20:19:35 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:42.400 20:19:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:42.400 20:19:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:42.400 20:19:35 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:42.400 20:19:35 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:42.400 INFO: launching applications... 
00:06:42.400 20:19:35 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:42.400 20:19:35 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:42.400 20:19:35 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:42.400 20:19:35 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:42.400 20:19:35 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:42.400 20:19:35 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:42.400 20:19:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:42.400 20:19:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:42.400 20:19:35 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=70026 00:06:42.400 20:19:35 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:42.400 Waiting for target to run... 00:06:42.400 20:19:35 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 70026 /var/tmp/spdk_tgt.sock 00:06:42.400 20:19:35 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:42.400 20:19:35 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 70026 ']' 00:06:42.400 20:19:35 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:42.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:06:42.400 20:19:35 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:42.400 20:19:35 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:42.400 20:19:35 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:42.400 20:19:35 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:42.658 [2024-11-26 20:19:35.993531] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:42.658 [2024-11-26 20:19:35.993805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70026 ] 00:06:43.226 [2024-11-26 20:19:36.483783] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.226 [2024-11-26 20:19:36.526096] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.484 20:19:36 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:43.484 00:06:43.484 INFO: shutting down applications... 00:06:43.484 20:19:36 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:43.484 20:19:36 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:43.484 20:19:36 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:06:43.484 20:19:36 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:43.484 20:19:36 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:43.484 20:19:36 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:43.484 20:19:36 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 70026 ]] 00:06:43.484 20:19:36 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 70026 00:06:43.484 20:19:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:43.484 20:19:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:43.484 20:19:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70026 00:06:43.484 20:19:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:44.052 20:19:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:44.052 20:19:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:44.052 20:19:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70026 00:06:44.052 20:19:37 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:44.311 20:19:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:44.311 20:19:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:44.311 20:19:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70026 00:06:44.311 20:19:37 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:44.311 20:19:37 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:44.311 20:19:37 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:44.311 20:19:37 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:44.311 SPDK target shutdown done 00:06:44.311 20:19:37 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 
00:06:44.311 Success 00:06:44.311 00:06:44.311 real 0m2.164s 00:06:44.311 user 0m1.494s 00:06:44.311 sys 0m0.583s 00:06:44.569 20:19:37 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.569 20:19:37 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:44.569 ************************************ 00:06:44.569 END TEST json_config_extra_key 00:06:44.569 ************************************ 00:06:44.569 20:19:37 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:44.569 20:19:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:44.569 20:19:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:44.569 20:19:37 -- common/autotest_common.sh@10 -- # set +x 00:06:44.569 ************************************ 00:06:44.569 START TEST alias_rpc 00:06:44.569 ************************************ 00:06:44.569 20:19:37 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:44.569 * Looking for test storage... 
00:06:44.569 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:44.569 20:19:38 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:44.569 20:19:38 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:44.569 20:19:38 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:44.569 20:19:38 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:44.569 20:19:38 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:44.569 20:19:38 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:44.569 20:19:38 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:44.569 20:19:38 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:44.569 20:19:38 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:44.570 20:19:38 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:44.570 20:19:38 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:44.570 20:19:38 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:44.570 20:19:38 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:44.570 20:19:38 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:44.570 20:19:38 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:44.570 20:19:38 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:44.570 20:19:38 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:44.570 20:19:38 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:44.570 20:19:38 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:44.570 20:19:38 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:44.570 20:19:38 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:44.570 20:19:38 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:44.570 20:19:38 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:44.570 20:19:38 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:44.828 20:19:38 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:44.828 20:19:38 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:44.828 20:19:38 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:44.828 20:19:38 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:44.828 20:19:38 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:44.828 20:19:38 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:44.828 20:19:38 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:44.828 20:19:38 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:44.828 20:19:38 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:44.828 20:19:38 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:44.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.828 --rc genhtml_branch_coverage=1 00:06:44.828 --rc genhtml_function_coverage=1 00:06:44.828 --rc genhtml_legend=1 00:06:44.828 --rc geninfo_all_blocks=1 00:06:44.828 --rc geninfo_unexecuted_blocks=1 00:06:44.828 00:06:44.828 ' 00:06:44.828 20:19:38 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:44.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.828 --rc genhtml_branch_coverage=1 00:06:44.828 --rc genhtml_function_coverage=1 00:06:44.828 --rc genhtml_legend=1 00:06:44.828 --rc geninfo_all_blocks=1 00:06:44.828 --rc geninfo_unexecuted_blocks=1 00:06:44.828 00:06:44.828 ' 00:06:44.828 20:19:38 alias_rpc -- common/autotest_common.sh@1695 -- 
# export 'LCOV=lcov 00:06:44.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.828 --rc genhtml_branch_coverage=1 00:06:44.828 --rc genhtml_function_coverage=1 00:06:44.828 --rc genhtml_legend=1 00:06:44.828 --rc geninfo_all_blocks=1 00:06:44.828 --rc geninfo_unexecuted_blocks=1 00:06:44.828 00:06:44.828 ' 00:06:44.828 20:19:38 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:44.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.828 --rc genhtml_branch_coverage=1 00:06:44.828 --rc genhtml_function_coverage=1 00:06:44.828 --rc genhtml_legend=1 00:06:44.828 --rc geninfo_all_blocks=1 00:06:44.828 --rc geninfo_unexecuted_blocks=1 00:06:44.828 00:06:44.828 ' 00:06:44.828 20:19:38 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:44.828 20:19:38 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=70106 00:06:44.828 20:19:38 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:44.828 20:19:38 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 70106 00:06:44.828 20:19:38 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 70106 ']' 00:06:44.828 20:19:38 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.828 20:19:38 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:44.828 20:19:38 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.828 20:19:38 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:44.828 20:19:38 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.828 [2024-11-26 20:19:38.223798] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:44.828 [2024-11-26 20:19:38.224069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70106 ] 00:06:45.086 [2024-11-26 20:19:38.384791] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.086 [2024-11-26 20:19:38.464808] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.653 20:19:39 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:45.653 20:19:39 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:45.653 20:19:39 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:45.911 20:19:39 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 70106 00:06:45.911 20:19:39 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 70106 ']' 00:06:45.911 20:19:39 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 70106 00:06:45.911 20:19:39 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:45.911 20:19:39 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:45.912 20:19:39 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70106 00:06:45.912 killing process with pid 70106 00:06:45.912 20:19:39 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:45.912 20:19:39 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:45.912 20:19:39 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70106' 00:06:45.912 20:19:39 alias_rpc -- common/autotest_common.sh@969 -- # kill 70106 00:06:45.912 20:19:39 alias_rpc -- common/autotest_common.sh@974 -- # wait 70106 00:06:46.478 ************************************ 00:06:46.478 END TEST alias_rpc 00:06:46.478 ************************************ 00:06:46.478 00:06:46.478 real 
0m2.047s 00:06:46.478 user 0m2.028s 00:06:46.478 sys 0m0.614s 00:06:46.478 20:19:39 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:46.478 20:19:39 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.478 20:19:40 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:46.478 20:19:40 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:46.478 20:19:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:46.478 20:19:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:46.478 20:19:40 -- common/autotest_common.sh@10 -- # set +x 00:06:46.478 ************************************ 00:06:46.478 START TEST spdkcli_tcp 00:06:46.478 ************************************ 00:06:46.738 20:19:40 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:46.738 * Looking for test storage... 00:06:46.738 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:46.738 20:19:40 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:46.738 20:19:40 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:06:46.738 20:19:40 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:46.738 20:19:40 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:46.738 20:19:40 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.738 20:19:40 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.738 20:19:40 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.738 20:19:40 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.738 20:19:40 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.738 20:19:40 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.738 20:19:40 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.738 20:19:40 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.738 
20:19:40 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.738 20:19:40 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.738 20:19:40 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.738 20:19:40 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:46.738 20:19:40 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:46.738 20:19:40 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.738 20:19:40 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:46.738 20:19:40 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:46.738 20:19:40 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:46.738 20:19:40 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.738 20:19:40 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:46.738 20:19:40 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.738 20:19:40 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:46.738 20:19:40 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:46.738 20:19:40 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.738 20:19:40 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:46.738 20:19:40 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.738 20:19:40 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.738 20:19:40 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.738 20:19:40 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:46.738 20:19:40 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.738 20:19:40 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:46.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.738 --rc genhtml_branch_coverage=1 00:06:46.738 --rc genhtml_function_coverage=1 00:06:46.738 --rc genhtml_legend=1 
00:06:46.738 --rc geninfo_all_blocks=1 00:06:46.738 --rc geninfo_unexecuted_blocks=1 00:06:46.738 00:06:46.738 ' 00:06:46.738 20:19:40 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:46.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.738 --rc genhtml_branch_coverage=1 00:06:46.738 --rc genhtml_function_coverage=1 00:06:46.738 --rc genhtml_legend=1 00:06:46.738 --rc geninfo_all_blocks=1 00:06:46.738 --rc geninfo_unexecuted_blocks=1 00:06:46.738 00:06:46.738 ' 00:06:46.738 20:19:40 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:46.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.738 --rc genhtml_branch_coverage=1 00:06:46.738 --rc genhtml_function_coverage=1 00:06:46.738 --rc genhtml_legend=1 00:06:46.738 --rc geninfo_all_blocks=1 00:06:46.738 --rc geninfo_unexecuted_blocks=1 00:06:46.738 00:06:46.738 ' 00:06:46.738 20:19:40 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:46.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.738 --rc genhtml_branch_coverage=1 00:06:46.738 --rc genhtml_function_coverage=1 00:06:46.738 --rc genhtml_legend=1 00:06:46.738 --rc geninfo_all_blocks=1 00:06:46.738 --rc geninfo_unexecuted_blocks=1 00:06:46.738 00:06:46.738 ' 00:06:46.738 20:19:40 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:46.738 20:19:40 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:46.738 20:19:40 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:46.738 20:19:40 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:46.738 20:19:40 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:46.738 20:19:40 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:46.738 20:19:40 spdkcli_tcp -- 
spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:46.738 20:19:40 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:46.738 20:19:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:46.738 20:19:40 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=70191 00:06:46.738 20:19:40 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:46.738 20:19:40 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 70191 00:06:46.738 20:19:40 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 70191 ']' 00:06:46.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.738 20:19:40 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.738 20:19:40 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:46.738 20:19:40 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.738 20:19:40 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:46.738 20:19:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:46.998 [2024-11-26 20:19:40.366843] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:46.998 [2024-11-26 20:19:40.367461] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70191 ] 00:06:46.998 [2024-11-26 20:19:40.531694] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:47.256 [2024-11-26 20:19:40.618227] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.256 [2024-11-26 20:19:40.618317] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.838 20:19:41 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:47.838 20:19:41 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:47.839 20:19:41 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=70208 00:06:47.839 20:19:41 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:47.839 20:19:41 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:48.128 [ 00:06:48.128 "bdev_malloc_delete", 00:06:48.128 "bdev_malloc_create", 00:06:48.128 "bdev_null_resize", 00:06:48.128 "bdev_null_delete", 00:06:48.128 "bdev_null_create", 00:06:48.128 "bdev_nvme_cuse_unregister", 00:06:48.128 "bdev_nvme_cuse_register", 00:06:48.128 "bdev_opal_new_user", 00:06:48.128 "bdev_opal_set_lock_state", 00:06:48.128 "bdev_opal_delete", 00:06:48.128 "bdev_opal_get_info", 00:06:48.128 "bdev_opal_create", 00:06:48.128 "bdev_nvme_opal_revert", 00:06:48.128 "bdev_nvme_opal_init", 00:06:48.128 "bdev_nvme_send_cmd", 00:06:48.128 "bdev_nvme_set_keys", 00:06:48.128 "bdev_nvme_get_path_iostat", 00:06:48.128 "bdev_nvme_get_mdns_discovery_info", 00:06:48.128 "bdev_nvme_stop_mdns_discovery", 00:06:48.128 "bdev_nvme_start_mdns_discovery", 00:06:48.128 "bdev_nvme_set_multipath_policy", 00:06:48.128 
"bdev_nvme_set_preferred_path", 00:06:48.128 "bdev_nvme_get_io_paths", 00:06:48.128 "bdev_nvme_remove_error_injection", 00:06:48.128 "bdev_nvme_add_error_injection", 00:06:48.128 "bdev_nvme_get_discovery_info", 00:06:48.128 "bdev_nvme_stop_discovery", 00:06:48.128 "bdev_nvme_start_discovery", 00:06:48.128 "bdev_nvme_get_controller_health_info", 00:06:48.128 "bdev_nvme_disable_controller", 00:06:48.128 "bdev_nvme_enable_controller", 00:06:48.128 "bdev_nvme_reset_controller", 00:06:48.128 "bdev_nvme_get_transport_statistics", 00:06:48.128 "bdev_nvme_apply_firmware", 00:06:48.128 "bdev_nvme_detach_controller", 00:06:48.128 "bdev_nvme_get_controllers", 00:06:48.128 "bdev_nvme_attach_controller", 00:06:48.128 "bdev_nvme_set_hotplug", 00:06:48.128 "bdev_nvme_set_options", 00:06:48.128 "bdev_passthru_delete", 00:06:48.128 "bdev_passthru_create", 00:06:48.128 "bdev_lvol_set_parent_bdev", 00:06:48.128 "bdev_lvol_set_parent", 00:06:48.128 "bdev_lvol_check_shallow_copy", 00:06:48.128 "bdev_lvol_start_shallow_copy", 00:06:48.128 "bdev_lvol_grow_lvstore", 00:06:48.128 "bdev_lvol_get_lvols", 00:06:48.128 "bdev_lvol_get_lvstores", 00:06:48.128 "bdev_lvol_delete", 00:06:48.128 "bdev_lvol_set_read_only", 00:06:48.128 "bdev_lvol_resize", 00:06:48.128 "bdev_lvol_decouple_parent", 00:06:48.128 "bdev_lvol_inflate", 00:06:48.128 "bdev_lvol_rename", 00:06:48.128 "bdev_lvol_clone_bdev", 00:06:48.128 "bdev_lvol_clone", 00:06:48.128 "bdev_lvol_snapshot", 00:06:48.128 "bdev_lvol_create", 00:06:48.128 "bdev_lvol_delete_lvstore", 00:06:48.128 "bdev_lvol_rename_lvstore", 00:06:48.128 "bdev_lvol_create_lvstore", 00:06:48.128 "bdev_raid_set_options", 00:06:48.128 "bdev_raid_remove_base_bdev", 00:06:48.128 "bdev_raid_add_base_bdev", 00:06:48.128 "bdev_raid_delete", 00:06:48.128 "bdev_raid_create", 00:06:48.128 "bdev_raid_get_bdevs", 00:06:48.128 "bdev_error_inject_error", 00:06:48.128 "bdev_error_delete", 00:06:48.128 "bdev_error_create", 00:06:48.128 "bdev_split_delete", 00:06:48.128 
"bdev_split_create", 00:06:48.128 "bdev_delay_delete", 00:06:48.128 "bdev_delay_create", 00:06:48.128 "bdev_delay_update_latency", 00:06:48.128 "bdev_zone_block_delete", 00:06:48.128 "bdev_zone_block_create", 00:06:48.128 "blobfs_create", 00:06:48.128 "blobfs_detect", 00:06:48.128 "blobfs_set_cache_size", 00:06:48.128 "bdev_aio_delete", 00:06:48.128 "bdev_aio_rescan", 00:06:48.128 "bdev_aio_create", 00:06:48.128 "bdev_ftl_set_property", 00:06:48.128 "bdev_ftl_get_properties", 00:06:48.128 "bdev_ftl_get_stats", 00:06:48.128 "bdev_ftl_unmap", 00:06:48.128 "bdev_ftl_unload", 00:06:48.128 "bdev_ftl_delete", 00:06:48.128 "bdev_ftl_load", 00:06:48.128 "bdev_ftl_create", 00:06:48.128 "bdev_virtio_attach_controller", 00:06:48.128 "bdev_virtio_scsi_get_devices", 00:06:48.128 "bdev_virtio_detach_controller", 00:06:48.128 "bdev_virtio_blk_set_hotplug", 00:06:48.128 "bdev_iscsi_delete", 00:06:48.128 "bdev_iscsi_create", 00:06:48.128 "bdev_iscsi_set_options", 00:06:48.128 "accel_error_inject_error", 00:06:48.128 "ioat_scan_accel_module", 00:06:48.128 "dsa_scan_accel_module", 00:06:48.128 "iaa_scan_accel_module", 00:06:48.128 "keyring_file_remove_key", 00:06:48.128 "keyring_file_add_key", 00:06:48.128 "keyring_linux_set_options", 00:06:48.128 "fsdev_aio_delete", 00:06:48.128 "fsdev_aio_create", 00:06:48.128 "iscsi_get_histogram", 00:06:48.128 "iscsi_enable_histogram", 00:06:48.128 "iscsi_set_options", 00:06:48.128 "iscsi_get_auth_groups", 00:06:48.128 "iscsi_auth_group_remove_secret", 00:06:48.128 "iscsi_auth_group_add_secret", 00:06:48.128 "iscsi_delete_auth_group", 00:06:48.128 "iscsi_create_auth_group", 00:06:48.128 "iscsi_set_discovery_auth", 00:06:48.128 "iscsi_get_options", 00:06:48.128 "iscsi_target_node_request_logout", 00:06:48.128 "iscsi_target_node_set_redirect", 00:06:48.128 "iscsi_target_node_set_auth", 00:06:48.128 "iscsi_target_node_add_lun", 00:06:48.128 "iscsi_get_stats", 00:06:48.128 "iscsi_get_connections", 00:06:48.128 "iscsi_portal_group_set_auth", 
00:06:48.128 "iscsi_start_portal_group", 00:06:48.128 "iscsi_delete_portal_group", 00:06:48.128 "iscsi_create_portal_group", 00:06:48.128 "iscsi_get_portal_groups", 00:06:48.128 "iscsi_delete_target_node", 00:06:48.128 "iscsi_target_node_remove_pg_ig_maps", 00:06:48.128 "iscsi_target_node_add_pg_ig_maps", 00:06:48.128 "iscsi_create_target_node", 00:06:48.128 "iscsi_get_target_nodes", 00:06:48.128 "iscsi_delete_initiator_group", 00:06:48.128 "iscsi_initiator_group_remove_initiators", 00:06:48.128 "iscsi_initiator_group_add_initiators", 00:06:48.128 "iscsi_create_initiator_group", 00:06:48.128 "iscsi_get_initiator_groups", 00:06:48.128 "nvmf_set_crdt", 00:06:48.128 "nvmf_set_config", 00:06:48.128 "nvmf_set_max_subsystems", 00:06:48.128 "nvmf_stop_mdns_prr", 00:06:48.128 "nvmf_publish_mdns_prr", 00:06:48.128 "nvmf_subsystem_get_listeners", 00:06:48.128 "nvmf_subsystem_get_qpairs", 00:06:48.128 "nvmf_subsystem_get_controllers", 00:06:48.128 "nvmf_get_stats", 00:06:48.128 "nvmf_get_transports", 00:06:48.128 "nvmf_create_transport", 00:06:48.128 "nvmf_get_targets", 00:06:48.128 "nvmf_delete_target", 00:06:48.128 "nvmf_create_target", 00:06:48.128 "nvmf_subsystem_allow_any_host", 00:06:48.129 "nvmf_subsystem_set_keys", 00:06:48.129 "nvmf_subsystem_remove_host", 00:06:48.129 "nvmf_subsystem_add_host", 00:06:48.129 "nvmf_ns_remove_host", 00:06:48.129 "nvmf_ns_add_host", 00:06:48.129 "nvmf_subsystem_remove_ns", 00:06:48.129 "nvmf_subsystem_set_ns_ana_group", 00:06:48.129 "nvmf_subsystem_add_ns", 00:06:48.129 "nvmf_subsystem_listener_set_ana_state", 00:06:48.129 "nvmf_discovery_get_referrals", 00:06:48.129 "nvmf_discovery_remove_referral", 00:06:48.129 "nvmf_discovery_add_referral", 00:06:48.129 "nvmf_subsystem_remove_listener", 00:06:48.129 "nvmf_subsystem_add_listener", 00:06:48.129 "nvmf_delete_subsystem", 00:06:48.129 "nvmf_create_subsystem", 00:06:48.129 "nvmf_get_subsystems", 00:06:48.129 "env_dpdk_get_mem_stats", 00:06:48.129 "nbd_get_disks", 00:06:48.129 
"nbd_stop_disk", 00:06:48.129 "nbd_start_disk", 00:06:48.129 "ublk_recover_disk", 00:06:48.129 "ublk_get_disks", 00:06:48.129 "ublk_stop_disk", 00:06:48.129 "ublk_start_disk", 00:06:48.129 "ublk_destroy_target", 00:06:48.129 "ublk_create_target", 00:06:48.129 "virtio_blk_create_transport", 00:06:48.129 "virtio_blk_get_transports", 00:06:48.129 "vhost_controller_set_coalescing", 00:06:48.129 "vhost_get_controllers", 00:06:48.129 "vhost_delete_controller", 00:06:48.129 "vhost_create_blk_controller", 00:06:48.129 "vhost_scsi_controller_remove_target", 00:06:48.129 "vhost_scsi_controller_add_target", 00:06:48.129 "vhost_start_scsi_controller", 00:06:48.129 "vhost_create_scsi_controller", 00:06:48.129 "thread_set_cpumask", 00:06:48.129 "scheduler_set_options", 00:06:48.129 "framework_get_governor", 00:06:48.129 "framework_get_scheduler", 00:06:48.129 "framework_set_scheduler", 00:06:48.129 "framework_get_reactors", 00:06:48.129 "thread_get_io_channels", 00:06:48.129 "thread_get_pollers", 00:06:48.129 "thread_get_stats", 00:06:48.129 "framework_monitor_context_switch", 00:06:48.129 "spdk_kill_instance", 00:06:48.129 "log_enable_timestamps", 00:06:48.129 "log_get_flags", 00:06:48.129 "log_clear_flag", 00:06:48.129 "log_set_flag", 00:06:48.129 "log_get_level", 00:06:48.129 "log_set_level", 00:06:48.129 "log_get_print_level", 00:06:48.129 "log_set_print_level", 00:06:48.129 "framework_enable_cpumask_locks", 00:06:48.129 "framework_disable_cpumask_locks", 00:06:48.129 "framework_wait_init", 00:06:48.129 "framework_start_init", 00:06:48.129 "scsi_get_devices", 00:06:48.129 "bdev_get_histogram", 00:06:48.129 "bdev_enable_histogram", 00:06:48.129 "bdev_set_qos_limit", 00:06:48.129 "bdev_set_qd_sampling_period", 00:06:48.129 "bdev_get_bdevs", 00:06:48.129 "bdev_reset_iostat", 00:06:48.129 "bdev_get_iostat", 00:06:48.129 "bdev_examine", 00:06:48.129 "bdev_wait_for_examine", 00:06:48.129 "bdev_set_options", 00:06:48.129 "accel_get_stats", 00:06:48.129 "accel_set_options", 
00:06:48.129 "accel_set_driver", 00:06:48.129 "accel_crypto_key_destroy", 00:06:48.129 "accel_crypto_keys_get", 00:06:48.129 "accel_crypto_key_create", 00:06:48.129 "accel_assign_opc", 00:06:48.129 "accel_get_module_info", 00:06:48.129 "accel_get_opc_assignments", 00:06:48.129 "vmd_rescan", 00:06:48.129 "vmd_remove_device", 00:06:48.129 "vmd_enable", 00:06:48.129 "sock_get_default_impl", 00:06:48.129 "sock_set_default_impl", 00:06:48.129 "sock_impl_set_options", 00:06:48.129 "sock_impl_get_options", 00:06:48.129 "iobuf_get_stats", 00:06:48.129 "iobuf_set_options", 00:06:48.129 "keyring_get_keys", 00:06:48.129 "framework_get_pci_devices", 00:06:48.129 "framework_get_config", 00:06:48.129 "framework_get_subsystems", 00:06:48.129 "fsdev_set_opts", 00:06:48.129 "fsdev_get_opts", 00:06:48.129 "trace_get_info", 00:06:48.129 "trace_get_tpoint_group_mask", 00:06:48.129 "trace_disable_tpoint_group", 00:06:48.129 "trace_enable_tpoint_group", 00:06:48.129 "trace_clear_tpoint_mask", 00:06:48.129 "trace_set_tpoint_mask", 00:06:48.129 "notify_get_notifications", 00:06:48.129 "notify_get_types", 00:06:48.129 "spdk_get_version", 00:06:48.129 "rpc_get_methods" 00:06:48.129 ] 00:06:48.129 20:19:41 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:48.129 20:19:41 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:48.129 20:19:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:48.129 20:19:41 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:48.129 20:19:41 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 70191 00:06:48.129 20:19:41 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 70191 ']' 00:06:48.129 20:19:41 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 70191 00:06:48.129 20:19:41 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:48.129 20:19:41 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:48.129 20:19:41 spdkcli_tcp -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70191 00:06:48.129 20:19:41 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:48.129 20:19:41 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:48.129 20:19:41 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70191' 00:06:48.129 killing process with pid 70191 00:06:48.129 20:19:41 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 70191 00:06:48.129 20:19:41 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 70191 00:06:48.696 00:06:48.696 real 0m2.099s 00:06:48.696 user 0m3.396s 00:06:48.696 sys 0m0.699s 00:06:48.696 20:19:42 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:48.696 20:19:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:48.696 ************************************ 00:06:48.696 END TEST spdkcli_tcp 00:06:48.696 ************************************ 00:06:48.696 20:19:42 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:48.696 20:19:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:48.696 20:19:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:48.696 20:19:42 -- common/autotest_common.sh@10 -- # set +x 00:06:48.696 ************************************ 00:06:48.696 START TEST dpdk_mem_utility 00:06:48.696 ************************************ 00:06:48.696 20:19:42 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:48.954 * Looking for test storage... 
00:06:48.954 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:48.954 20:19:42 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:48.954 20:19:42 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:06:48.954 20:19:42 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:48.954 20:19:42 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:48.954 20:19:42 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:48.954 20:19:42 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:48.954 20:19:42 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:48.954 20:19:42 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:48.954 20:19:42 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:48.954 20:19:42 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:48.954 20:19:42 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:48.954 20:19:42 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:48.954 20:19:42 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:48.954 20:19:42 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:48.954 20:19:42 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:48.954 20:19:42 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:48.954 20:19:42 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:48.954 20:19:42 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:48.954 20:19:42 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:48.954 20:19:42 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:48.954 20:19:42 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:48.954 20:19:42 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:48.954 20:19:42 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:48.954 20:19:42 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:48.954 20:19:42 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:48.954 20:19:42 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:48.954 20:19:42 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:48.954 20:19:42 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:48.954 20:19:42 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:48.954 20:19:42 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:48.954 20:19:42 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:48.954 20:19:42 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:48.954 20:19:42 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:48.954 20:19:42 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:48.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.954 --rc genhtml_branch_coverage=1 00:06:48.954 --rc genhtml_function_coverage=1 00:06:48.954 --rc genhtml_legend=1 00:06:48.954 --rc geninfo_all_blocks=1 00:06:48.954 --rc geninfo_unexecuted_blocks=1 00:06:48.954 00:06:48.954 ' 00:06:48.955 20:19:42 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:48.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.955 --rc genhtml_branch_coverage=1 00:06:48.955 --rc genhtml_function_coverage=1 00:06:48.955 --rc genhtml_legend=1 00:06:48.955 --rc geninfo_all_blocks=1 00:06:48.955 --rc 
geninfo_unexecuted_blocks=1 00:06:48.955 00:06:48.955 ' 00:06:48.955 20:19:42 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:48.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.955 --rc genhtml_branch_coverage=1 00:06:48.955 --rc genhtml_function_coverage=1 00:06:48.955 --rc genhtml_legend=1 00:06:48.955 --rc geninfo_all_blocks=1 00:06:48.955 --rc geninfo_unexecuted_blocks=1 00:06:48.955 00:06:48.955 ' 00:06:48.955 20:19:42 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:48.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.955 --rc genhtml_branch_coverage=1 00:06:48.955 --rc genhtml_function_coverage=1 00:06:48.955 --rc genhtml_legend=1 00:06:48.955 --rc geninfo_all_blocks=1 00:06:48.955 --rc geninfo_unexecuted_blocks=1 00:06:48.955 00:06:48.955 ' 00:06:48.955 20:19:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:48.955 20:19:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=70291 00:06:48.955 20:19:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:48.955 20:19:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 70291 00:06:48.955 20:19:42 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 70291 ']' 00:06:48.955 20:19:42 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.955 20:19:42 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:48.955 20:19:42 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:48.955 20:19:42 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:48.955 20:19:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:49.212 [2024-11-26 20:19:42.525529] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:49.212 [2024-11-26 20:19:42.525699] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70291 ] 00:06:49.212 [2024-11-26 20:19:42.692246] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.470 [2024-11-26 20:19:42.773878] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.039 20:19:43 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:50.039 20:19:43 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:50.040 20:19:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:50.040 20:19:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:50.040 20:19:43 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.040 20:19:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:50.040 { 00:06:50.040 "filename": "/tmp/spdk_mem_dump.txt" 00:06:50.040 } 00:06:50.040 20:19:43 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.040 20:19:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:50.040 DPDK memory size 860.000000 MiB in 1 heap(s) 00:06:50.040 1 heaps totaling size 860.000000 MiB 00:06:50.040 size: 860.000000 MiB heap id: 0 00:06:50.040 end heaps---------- 00:06:50.040 9 mempools totaling size 642.649841 MiB 00:06:50.040 
size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:50.040 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:50.040 size: 92.545471 MiB name: bdev_io_70291 00:06:50.040 size: 51.011292 MiB name: evtpool_70291 00:06:50.040 size: 50.003479 MiB name: msgpool_70291 00:06:50.040 size: 36.509338 MiB name: fsdev_io_70291 00:06:50.040 size: 21.763794 MiB name: PDU_Pool 00:06:50.040 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:50.040 size: 0.026123 MiB name: Session_Pool 00:06:50.040 end mempools------- 00:06:50.040 6 memzones totaling size 4.142822 MiB 00:06:50.040 size: 1.000366 MiB name: RG_ring_0_70291 00:06:50.040 size: 1.000366 MiB name: RG_ring_1_70291 00:06:50.040 size: 1.000366 MiB name: RG_ring_4_70291 00:06:50.040 size: 1.000366 MiB name: RG_ring_5_70291 00:06:50.040 size: 0.125366 MiB name: RG_ring_2_70291 00:06:50.040 size: 0.015991 MiB name: RG_ring_3_70291 00:06:50.040 end memzones------- 00:06:50.040 20:19:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:50.040 heap id: 0 total size: 860.000000 MiB number of busy elements: 303 number of free elements: 16 00:06:50.040 list of free elements. 
size: 13.937256 MiB 00:06:50.040 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:50.040 element at address: 0x200000800000 with size: 1.996948 MiB 00:06:50.040 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:06:50.040 element at address: 0x20001be00000 with size: 0.999878 MiB 00:06:50.040 element at address: 0x200034a00000 with size: 0.994446 MiB 00:06:50.040 element at address: 0x200009600000 with size: 0.959839 MiB 00:06:50.040 element at address: 0x200015e00000 with size: 0.954285 MiB 00:06:50.040 element at address: 0x20001c000000 with size: 0.936584 MiB 00:06:50.040 element at address: 0x200000200000 with size: 0.834839 MiB 00:06:50.040 element at address: 0x20001d800000 with size: 0.568420 MiB 00:06:50.040 element at address: 0x20000d800000 with size: 0.489258 MiB 00:06:50.040 element at address: 0x200003e00000 with size: 0.488464 MiB 00:06:50.040 element at address: 0x20001c200000 with size: 0.485657 MiB 00:06:50.040 element at address: 0x200007000000 with size: 0.480469 MiB 00:06:50.040 element at address: 0x20002ac00000 with size: 0.395752 MiB 00:06:50.040 element at address: 0x200003a00000 with size: 0.353027 MiB 00:06:50.040 list of standard malloc elements. 
size: 199.266052 MiB 00:06:50.040 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:06:50.040 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:06:50.040 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:06:50.040 element at address: 0x20001befff80 with size: 1.000122 MiB 00:06:50.040 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:06:50.040 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:50.040 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:06:50.040 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:50.040 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:06:50.040 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:06:50.040 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:06:50.040 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:06:50.040 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:06:50.040 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:06:50.040 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:06:50.040 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:06:50.040 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:06:50.040 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:06:50.040 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:06:50.040 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:06:50.040 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:06:50.040 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:06:50.040 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:50.040 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:06:50.040 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:50.040 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:06:50.040 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:06:50.040 element at 
address: 0x2000002d6a40 with size: 0.000183 MiB 00:06:50.040 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:06:50.040 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:06:50.040 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:06:50.040 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:06:50.040 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:06:50.040 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:06:50.040 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:06:50.040 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:06:50.040 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:06:50.040 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:06:50.040 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:50.040 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:06:50.040 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:50.040 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:50.040 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:50.040 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:50.040 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:06:50.040 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:06:50.040 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:50.040 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:50.040 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:50.040 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:50.040 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:50.040 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:50.040 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:50.040 element at address: 0x200003a5a600 with size: 0.000183 MiB 00:06:50.040 element at address: 0x200003a5a800 with size: 0.000183 MiB 
00:06:50.040 element at address: 0x200003a5eac0 with size: 0.000183 MiB 00:06:50.040 element at address: 0x200003a7ed80 with size: 0.000183 MiB 00:06:50.040 element at address: 0x200003a7ee40 with size: 0.000183 MiB 00:06:50.040 element at address: 0x200003a7ef00 with size: 0.000183 MiB 00:06:50.040 element at address: 0x200003a7efc0 with size: 0.000183 MiB 00:06:50.040 element at address: 0x200003a7f080 with size: 0.000183 MiB 00:06:50.040 element at address: 0x200003a7f140 with size: 0.000183 MiB 00:06:50.040 element at address: 0x200003a7f200 with size: 0.000183 MiB 00:06:50.040 element at address: 0x200003a7f2c0 with size: 0.000183 MiB 00:06:50.040 element at address: 0x200003a7f380 with size: 0.000183 MiB 00:06:50.040 element at address: 0x200003a7f440 with size: 0.000183 MiB 00:06:50.040 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:06:50.040 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:06:50.040 element at address: 0x200003aff880 with size: 0.000183 MiB 00:06:50.040 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:50.040 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:50.040 element at address: 0x200003e7d0c0 with size: 0.000183 MiB 00:06:50.040 element at address: 0x200003e7d180 with size: 0.000183 MiB 00:06:50.040 element at address: 0x200003e7d240 with size: 0.000183 MiB 00:06:50.040 element at address: 0x200003e7d300 with size: 0.000183 MiB 00:06:50.040 element at address: 0x200003e7d3c0 with size: 0.000183 MiB 00:06:50.040 element at address: 0x200003e7d480 with size: 0.000183 MiB 00:06:50.040 element at address: 0x200003e7d540 with size: 0.000183 MiB 00:06:50.040 element at address: 0x200003e7d600 with size: 0.000183 MiB 00:06:50.040 element at address: 0x200003e7d6c0 with size: 0.000183 MiB 00:06:50.040 element at address: 0x200003e7d780 with size: 0.000183 MiB 00:06:50.040 element at address: 0x200003e7d840 with size: 0.000183 MiB 00:06:50.040 element at address: 0x200003e7d900 with 
size: 0.000183 MiB 00:06:50.040 element at address: 0x200003e7d9c0 with size: 0.000183 MiB 00:06:50.040 element at address: 0x200003e7da80 with size: 0.000183 MiB 00:06:50.040 element at address: 0x200003e7db40 with size: 0.000183 MiB 00:06:50.040 element at address: 0x200003e7dc00 with size: 0.000183 MiB 00:06:50.040 element at address: 0x200003e7dcc0 with size: 0.000183 MiB 00:06:50.040 element at address: 0x200003e7dd80 with size: 0.000183 MiB 00:06:50.040 element at address: 0x200003e7de40 with size: 0.000183 MiB 00:06:50.040 element at address: 0x200003e7df00 with size: 0.000183 MiB 00:06:50.040 element at address: 0x200003e7dfc0 with size: 0.000183 MiB 00:06:50.040 element at address: 0x200003e7e080 with size: 0.000183 MiB 00:06:50.040 element at address: 0x200003e7e140 with size: 0.000183 MiB 00:06:50.040 element at address: 0x200003e7e200 with size: 0.000183 MiB 00:06:50.040 element at address: 0x200003e7e2c0 with size: 0.000183 MiB 00:06:50.040 element at address: 0x200003e7e380 with size: 0.000183 MiB 00:06:50.040 element at address: 0x200003e7e440 with size: 0.000183 MiB 00:06:50.040 element at address: 0x200003e7e500 with size: 0.000183 MiB 00:06:50.040 element at address: 0x200003e7e5c0 with size: 0.000183 MiB 00:06:50.041 element at address: 0x200003e7e680 with size: 0.000183 MiB 00:06:50.041 element at address: 0x200003e7e740 with size: 0.000183 MiB 00:06:50.041 element at address: 0x200003e7e800 with size: 0.000183 MiB 00:06:50.041 element at address: 0x200003e7e8c0 with size: 0.000183 MiB 00:06:50.041 element at address: 0x200003e7e980 with size: 0.000183 MiB 00:06:50.041 element at address: 0x200003e7ea40 with size: 0.000183 MiB 00:06:50.041 element at address: 0x200003e7eb00 with size: 0.000183 MiB 00:06:50.041 element at address: 0x200003e7ebc0 with size: 0.000183 MiB 00:06:50.041 element at address: 0x200003e7ec80 with size: 0.000183 MiB 00:06:50.041 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:06:50.041 element at address: 
0x200003e7ee00 with size: 0.000183 MiB 00:06:50.041 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20000707b000 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20000707b0c0 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20000707b180 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20000707b240 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20000707b300 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20000707b3c0 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20000707b480 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20000707b540 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20000707b600 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:06:50.041 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:06:50.041 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20000d87d400 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20000d87d4c0 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20000d87d580 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20000d87d640 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20000d87d700 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20000d87d7c0 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20000d87d880 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20000d87d940 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:06:50.041 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:06:50.041 
element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d891840 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d891900 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d8919c0 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d891a80 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d891b40 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d891c00 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d891cc0 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d891d80 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d891e40 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d891f00 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d891fc0 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d892080 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d892140 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d892200 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d8922c0 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d892380 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d892440 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d892500 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d8925c0 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d892680 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d892740 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d892800 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d8928c0 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d892980 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d892a40 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d892b00 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d892bc0 with size: 0.000183 
MiB 00:06:50.041 element at address: 0x20001d892c80 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d892d40 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d892e00 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d892ec0 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d892f80 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d893040 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d893100 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d8931c0 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d893280 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d893340 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d893400 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d8934c0 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d893580 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d893640 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d893700 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d8937c0 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d893880 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d893940 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d893a00 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d893ac0 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d893b80 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d893c40 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d893d00 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d893dc0 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d893e80 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d893f40 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d894000 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d8940c0 
with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d894180 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d894240 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d894300 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d8943c0 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d894480 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d894540 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d894600 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d8946c0 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d894780 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d894840 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d894900 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d8949c0 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d894a80 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d894b40 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d894c00 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d894cc0 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d894d80 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d894e40 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d894f00 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d894fc0 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d895080 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d895140 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d895200 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d8952c0 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d895380 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20001d895440 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20002ac65500 with size: 0.000183 MiB 00:06:50.041 element at 
address: 0x20002ac655c0 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20002ac6c1c0 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20002ac6c3c0 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20002ac6c480 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20002ac6c540 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20002ac6c600 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20002ac6c6c0 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20002ac6c780 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20002ac6c840 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20002ac6c900 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20002ac6c9c0 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20002ac6ca80 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20002ac6cb40 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20002ac6cc00 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20002ac6ccc0 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20002ac6cd80 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20002ac6ce40 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20002ac6cf00 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20002ac6cfc0 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20002ac6d080 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20002ac6d140 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20002ac6d200 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20002ac6d2c0 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20002ac6d380 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20002ac6d440 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20002ac6d500 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20002ac6d5c0 with size: 0.000183 MiB 00:06:50.041 element at address: 0x20002ac6d680 with size: 0.000183 MiB 
00:06:50.041 element at address: 0x20002ac6d740 with size: 0.000183 MiB 00:06:50.042 element at address: 0x20002ac6d800 with size: 0.000183 MiB 00:06:50.042 element at address: 0x20002ac6d8c0 with size: 0.000183 MiB 00:06:50.042 element at address: 0x20002ac6d980 with size: 0.000183 MiB 00:06:50.042 element at address: 0x20002ac6da40 with size: 0.000183 MiB 00:06:50.042 element at address: 0x20002ac6db00 with size: 0.000183 MiB 00:06:50.042 element at address: 0x20002ac6dbc0 with size: 0.000183 MiB 00:06:50.042 element at address: 0x20002ac6dc80 with size: 0.000183 MiB 00:06:50.042 element at address: 0x20002ac6dd40 with size: 0.000183 MiB 00:06:50.042 element at address: 0x20002ac6de00 with size: 0.000183 MiB 00:06:50.042 element at address: 0x20002ac6dec0 with size: 0.000183 MiB 00:06:50.042 element at address: 0x20002ac6df80 with size: 0.000183 MiB 00:06:50.042 element at address: 0x20002ac6e040 with size: 0.000183 MiB 00:06:50.042 element at address: 0x20002ac6e100 with size: 0.000183 MiB 00:06:50.042 element at address: 0x20002ac6e1c0 with size: 0.000183 MiB 00:06:50.042 element at address: 0x20002ac6e280 with size: 0.000183 MiB 00:06:50.042 element at address: 0x20002ac6e340 with size: 0.000183 MiB 00:06:50.042 element at address: 0x20002ac6e400 with size: 0.000183 MiB 00:06:50.042 element at address: 0x20002ac6e4c0 with size: 0.000183 MiB 00:06:50.042 element at address: 0x20002ac6e580 with size: 0.000183 MiB 00:06:50.042 element at address: 0x20002ac6e640 with size: 0.000183 MiB 00:06:50.042 element at address: 0x20002ac6e700 with size: 0.000183 MiB 00:06:50.042 element at address: 0x20002ac6e7c0 with size: 0.000183 MiB 00:06:50.042 element at address: 0x20002ac6e880 with size: 0.000183 MiB 00:06:50.042 element at address: 0x20002ac6e940 with size: 0.000183 MiB 00:06:50.042 element at address: 0x20002ac6ea00 with size: 0.000183 MiB 00:06:50.042 element at address: 0x20002ac6eac0 with size: 0.000183 MiB 00:06:50.042 element at address: 0x20002ac6eb80 with 
size: 0.000183 MiB 00:06:50.042 element at address: 0x20002ac6ec40 with size: 0.000183 MiB 00:06:50.042 element at address: 0x20002ac6ed00 with size: 0.000183 MiB 00:06:50.042 element at address: 0x20002ac6edc0 with size: 0.000183 MiB 00:06:50.042 element at address: 0x20002ac6ee80 with size: 0.000183 MiB 00:06:50.042 element at address: 0x20002ac6ef40 with size: 0.000183 MiB 00:06:50.042 element at address: 0x20002ac6f000 with size: 0.000183 MiB 00:06:50.042 element at address: 0x20002ac6f0c0 with size: 0.000183 MiB 00:06:50.042 element at address: 0x20002ac6f180 with size: 0.000183 MiB 00:06:50.042 element at address: 0x20002ac6f240 with size: 0.000183 MiB 00:06:50.042 element at address: 0x20002ac6f300 with size: 0.000183 MiB 00:06:50.042 element at address: 0x20002ac6f3c0 with size: 0.000183 MiB 00:06:50.042 element at address: 0x20002ac6f480 with size: 0.000183 MiB 00:06:50.042 element at address: 0x20002ac6f540 with size: 0.000183 MiB 00:06:50.042 element at address: 0x20002ac6f600 with size: 0.000183 MiB 00:06:50.042 element at address: 0x20002ac6f6c0 with size: 0.000183 MiB 00:06:50.042 element at address: 0x20002ac6f780 with size: 0.000183 MiB 00:06:50.042 element at address: 0x20002ac6f840 with size: 0.000183 MiB 00:06:50.042 element at address: 0x20002ac6f900 with size: 0.000183 MiB 00:06:50.042 element at address: 0x20002ac6f9c0 with size: 0.000183 MiB 00:06:50.042 element at address: 0x20002ac6fa80 with size: 0.000183 MiB 00:06:50.042 element at address: 0x20002ac6fb40 with size: 0.000183 MiB 00:06:50.042 element at address: 0x20002ac6fc00 with size: 0.000183 MiB 00:06:50.042 element at address: 0x20002ac6fcc0 with size: 0.000183 MiB 00:06:50.042 element at address: 0x20002ac6fd80 with size: 0.000183 MiB 00:06:50.042 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:06:50.042 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:06:50.042 list of memzone associated elements. 
size: 646.796692 MiB 00:06:50.042 element at address: 0x20001d895500 with size: 211.416748 MiB 00:06:50.042 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:50.042 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:06:50.042 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:50.042 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:06:50.042 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_70291_0 00:06:50.042 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:50.042 associated memzone info: size: 48.002930 MiB name: MP_evtpool_70291_0 00:06:50.042 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:50.042 associated memzone info: size: 48.002930 MiB name: MP_msgpool_70291_0 00:06:50.042 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:06:50.042 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_70291_0 00:06:50.042 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:06:50.042 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:50.042 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:06:50.042 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:50.042 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:50.042 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_70291 00:06:50.042 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:50.042 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_70291 00:06:50.042 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:50.042 associated memzone info: size: 1.007996 MiB name: MP_evtpool_70291 00:06:50.042 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:06:50.042 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:50.042 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:06:50.042 associated memzone 
info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:50.042 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:06:50.042 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:50.042 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:06:50.042 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:50.042 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:50.042 associated memzone info: size: 1.000366 MiB name: RG_ring_0_70291 00:06:50.042 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:50.042 associated memzone info: size: 1.000366 MiB name: RG_ring_1_70291 00:06:50.042 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:06:50.042 associated memzone info: size: 1.000366 MiB name: RG_ring_4_70291 00:06:50.042 element at address: 0x200034afe940 with size: 1.000488 MiB 00:06:50.042 associated memzone info: size: 1.000366 MiB name: RG_ring_5_70291 00:06:50.042 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:06:50.042 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_70291 00:06:50.042 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:06:50.042 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_70291 00:06:50.042 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:06:50.042 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:50.042 element at address: 0x20000707b780 with size: 0.500488 MiB 00:06:50.042 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:50.042 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:06:50.042 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:50.042 element at address: 0x200003a5eb80 with size: 0.125488 MiB 00:06:50.042 associated memzone info: size: 0.125366 MiB name: RG_ring_2_70291 00:06:50.042 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:06:50.042 associated 
memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:50.042 element at address: 0x20002ac65680 with size: 0.023743 MiB 00:06:50.042 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:50.042 element at address: 0x200003a5a8c0 with size: 0.016113 MiB 00:06:50.042 associated memzone info: size: 0.015991 MiB name: RG_ring_3_70291 00:06:50.042 element at address: 0x20002ac6b7c0 with size: 0.002441 MiB 00:06:50.042 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:50.042 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:06:50.042 associated memzone info: size: 0.000183 MiB name: MP_msgpool_70291 00:06:50.042 element at address: 0x200003aff940 with size: 0.000305 MiB 00:06:50.042 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_70291 00:06:50.042 element at address: 0x200003a5a6c0 with size: 0.000305 MiB 00:06:50.042 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_70291 00:06:50.042 element at address: 0x20002ac6c280 with size: 0.000305 MiB 00:06:50.042 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:50.042 20:19:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:50.042 20:19:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 70291 00:06:50.042 20:19:43 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 70291 ']' 00:06:50.042 20:19:43 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 70291 00:06:50.042 20:19:43 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:50.042 20:19:43 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:50.042 20:19:43 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70291 00:06:50.042 killing process with pid 70291 00:06:50.042 20:19:43 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:50.042 20:19:43 
dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:50.042 20:19:43 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70291' 00:06:50.042 20:19:43 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 70291 00:06:50.042 20:19:43 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 70291 00:06:50.980 00:06:50.980 real 0m1.971s 00:06:50.980 user 0m1.868s 00:06:50.980 sys 0m0.630s 00:06:50.980 20:19:44 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:50.980 ************************************ 00:06:50.980 END TEST dpdk_mem_utility 00:06:50.980 ************************************ 00:06:50.980 20:19:44 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:50.980 20:19:44 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:50.980 20:19:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:50.980 20:19:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:50.980 20:19:44 -- common/autotest_common.sh@10 -- # set +x 00:06:50.980 ************************************ 00:06:50.980 START TEST event 00:06:50.980 ************************************ 00:06:50.980 20:19:44 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:50.980 * Looking for test storage... 
00:06:50.980 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:50.980 20:19:44 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:50.981 20:19:44 event -- common/autotest_common.sh@1681 -- # lcov --version 00:06:50.981 20:19:44 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:50.981 20:19:44 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:50.981 20:19:44 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:50.981 20:19:44 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:50.981 20:19:44 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:50.981 20:19:44 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:50.981 20:19:44 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:50.981 20:19:44 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:50.981 20:19:44 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:50.981 20:19:44 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:50.981 20:19:44 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:50.981 20:19:44 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:50.981 20:19:44 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:50.981 20:19:44 event -- scripts/common.sh@344 -- # case "$op" in 00:06:50.981 20:19:44 event -- scripts/common.sh@345 -- # : 1 00:06:50.981 20:19:44 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:50.981 20:19:44 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:50.981 20:19:44 event -- scripts/common.sh@365 -- # decimal 1 00:06:50.981 20:19:44 event -- scripts/common.sh@353 -- # local d=1 00:06:50.981 20:19:44 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:50.981 20:19:44 event -- scripts/common.sh@355 -- # echo 1 00:06:50.981 20:19:44 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:50.981 20:19:44 event -- scripts/common.sh@366 -- # decimal 2 00:06:50.981 20:19:44 event -- scripts/common.sh@353 -- # local d=2 00:06:50.981 20:19:44 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:50.981 20:19:44 event -- scripts/common.sh@355 -- # echo 2 00:06:50.981 20:19:44 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:50.981 20:19:44 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:50.981 20:19:44 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:50.981 20:19:44 event -- scripts/common.sh@368 -- # return 0 00:06:50.981 20:19:44 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:50.981 20:19:44 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:50.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.981 --rc genhtml_branch_coverage=1 00:06:50.981 --rc genhtml_function_coverage=1 00:06:50.981 --rc genhtml_legend=1 00:06:50.981 --rc geninfo_all_blocks=1 00:06:50.981 --rc geninfo_unexecuted_blocks=1 00:06:50.981 00:06:50.981 ' 00:06:50.981 20:19:44 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:50.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.981 --rc genhtml_branch_coverage=1 00:06:50.981 --rc genhtml_function_coverage=1 00:06:50.981 --rc genhtml_legend=1 00:06:50.981 --rc geninfo_all_blocks=1 00:06:50.981 --rc geninfo_unexecuted_blocks=1 00:06:50.981 00:06:50.981 ' 00:06:50.981 20:19:44 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:50.981 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:50.981 --rc genhtml_branch_coverage=1 00:06:50.981 --rc genhtml_function_coverage=1 00:06:50.981 --rc genhtml_legend=1 00:06:50.981 --rc geninfo_all_blocks=1 00:06:50.981 --rc geninfo_unexecuted_blocks=1 00:06:50.981 00:06:50.981 ' 00:06:50.981 20:19:44 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:50.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.981 --rc genhtml_branch_coverage=1 00:06:50.981 --rc genhtml_function_coverage=1 00:06:50.981 --rc genhtml_legend=1 00:06:50.981 --rc geninfo_all_blocks=1 00:06:50.981 --rc geninfo_unexecuted_blocks=1 00:06:50.981 00:06:50.981 ' 00:06:50.981 20:19:44 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:50.981 20:19:44 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:50.981 20:19:44 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:50.981 20:19:44 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:50.981 20:19:44 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:50.981 20:19:44 event -- common/autotest_common.sh@10 -- # set +x 00:06:50.981 ************************************ 00:06:50.981 START TEST event_perf 00:06:50.981 ************************************ 00:06:50.981 20:19:44 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:50.981 Running I/O for 1 seconds...[2024-11-26 20:19:44.497984] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:50.981 [2024-11-26 20:19:44.498218] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70377 ] 00:06:51.240 [2024-11-26 20:19:44.651185] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:51.240 [2024-11-26 20:19:44.735983] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.240 [2024-11-26 20:19:44.736115] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.240 [2024-11-26 20:19:44.736167] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:51.240 Running I/O for 1 seconds...[2024-11-26 20:19:44.736219] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.621 00:06:52.621 lcore 0: 174849 00:06:52.621 lcore 1: 174849 00:06:52.621 lcore 2: 174849 00:06:52.621 lcore 3: 174850 00:06:52.621 done. 
00:06:52.621 00:06:52.621 real 0m1.434s 00:06:52.621 user 0m4.172s 00:06:52.621 sys 0m0.137s 00:06:52.621 20:19:45 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:52.621 20:19:45 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:52.621 ************************************ 00:06:52.621 END TEST event_perf 00:06:52.621 ************************************ 00:06:52.621 20:19:45 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:52.621 20:19:45 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:52.621 20:19:45 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:52.621 20:19:45 event -- common/autotest_common.sh@10 -- # set +x 00:06:52.621 ************************************ 00:06:52.621 START TEST event_reactor 00:06:52.621 ************************************ 00:06:52.621 20:19:45 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:52.621 [2024-11-26 20:19:45.984138] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:52.621 [2024-11-26 20:19:45.984530] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70411 ] 00:06:52.622 [2024-11-26 20:19:46.165984] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.882 [2024-11-26 20:19:46.249735] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.902 test_start 00:06:53.902 oneshot 00:06:53.902 tick 100 00:06:53.902 tick 100 00:06:53.902 tick 250 00:06:53.902 tick 100 00:06:53.902 tick 100 00:06:53.902 tick 100 00:06:53.902 tick 250 00:06:53.902 tick 500 00:06:53.902 tick 100 00:06:53.902 tick 100 00:06:53.902 tick 250 00:06:53.902 tick 100 00:06:53.902 tick 100 00:06:53.902 test_end 00:06:53.902 ************************************ 00:06:53.902 00:06:53.902 real 0m1.445s 00:06:53.902 user 0m1.215s 00:06:53.902 sys 0m0.121s 00:06:53.902 20:19:47 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.902 20:19:47 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:53.902 END TEST event_reactor 00:06:53.902 ************************************ 00:06:53.902 20:19:47 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:53.902 20:19:47 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:53.902 20:19:47 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:53.902 20:19:47 event -- common/autotest_common.sh@10 -- # set +x 00:06:54.162 ************************************ 00:06:54.162 START TEST event_reactor_perf 00:06:54.162 ************************************ 00:06:54.162 20:19:47 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:54.162 [2024-11-26 
20:19:47.495573] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:54.162 [2024-11-26 20:19:47.495851] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70453 ] 00:06:54.162 [2024-11-26 20:19:47.655592] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.421 [2024-11-26 20:19:47.738203] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.359 test_start 00:06:55.359 test_end 00:06:55.359 Performance: 338572 events per second 00:06:55.359 00:06:55.359 real 0m1.424s 00:06:55.359 user 0m1.201s 00:06:55.359 sys 0m0.115s 00:06:55.359 ************************************ 00:06:55.359 END TEST event_reactor_perf 00:06:55.359 ************************************ 00:06:55.359 20:19:48 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:55.359 20:19:48 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:55.618 20:19:48 event -- event/event.sh@49 -- # uname -s 00:06:55.618 20:19:48 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:55.618 20:19:48 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:55.618 20:19:48 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:55.618 20:19:48 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.618 20:19:48 event -- common/autotest_common.sh@10 -- # set +x 00:06:55.618 ************************************ 00:06:55.618 START TEST event_scheduler 00:06:55.618 ************************************ 00:06:55.618 20:19:48 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:55.618 * Looking for test storage... 
00:06:55.618 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:55.618 20:19:49 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:55.618 20:19:49 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:06:55.618 20:19:49 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:55.618 20:19:49 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:55.618 20:19:49 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.618 20:19:49 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.618 20:19:49 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.618 20:19:49 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.618 20:19:49 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.618 20:19:49 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.618 20:19:49 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.618 20:19:49 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.618 20:19:49 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.618 20:19:49 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.618 20:19:49 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:55.618 20:19:49 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:55.618 20:19:49 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:55.618 20:19:49 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.618 20:19:49 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:55.618 20:19:49 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:55.618 20:19:49 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:55.618 20:19:49 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.618 20:19:49 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:55.618 20:19:49 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.878 20:19:49 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:55.878 20:19:49 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:55.878 20:19:49 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.878 20:19:49 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:55.879 20:19:49 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.879 20:19:49 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.879 20:19:49 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.879 20:19:49 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:55.879 20:19:49 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.879 20:19:49 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:55.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.879 --rc genhtml_branch_coverage=1 00:06:55.879 --rc genhtml_function_coverage=1 00:06:55.879 --rc genhtml_legend=1 00:06:55.879 --rc geninfo_all_blocks=1 00:06:55.879 --rc geninfo_unexecuted_blocks=1 00:06:55.879 00:06:55.879 ' 00:06:55.879 20:19:49 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:55.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.879 --rc genhtml_branch_coverage=1 00:06:55.879 --rc genhtml_function_coverage=1 00:06:55.879 --rc 
genhtml_legend=1 00:06:55.879 --rc geninfo_all_blocks=1 00:06:55.879 --rc geninfo_unexecuted_blocks=1 00:06:55.879 00:06:55.879 ' 00:06:55.879 20:19:49 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:55.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.879 --rc genhtml_branch_coverage=1 00:06:55.879 --rc genhtml_function_coverage=1 00:06:55.879 --rc genhtml_legend=1 00:06:55.879 --rc geninfo_all_blocks=1 00:06:55.879 --rc geninfo_unexecuted_blocks=1 00:06:55.879 00:06:55.879 ' 00:06:55.879 20:19:49 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:55.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.879 --rc genhtml_branch_coverage=1 00:06:55.879 --rc genhtml_function_coverage=1 00:06:55.879 --rc genhtml_legend=1 00:06:55.879 --rc geninfo_all_blocks=1 00:06:55.879 --rc geninfo_unexecuted_blocks=1 00:06:55.879 00:06:55.879 ' 00:06:55.879 20:19:49 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:55.879 20:19:49 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70524 00:06:55.879 20:19:49 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:55.879 20:19:49 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:55.879 20:19:49 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70524 00:06:55.879 20:19:49 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 70524 ']' 00:06:55.879 20:19:49 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.879 20:19:49 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:55.879 20:19:49 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:06:55.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.879 20:19:49 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:55.879 20:19:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:55.879 [2024-11-26 20:19:49.263129] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:55.879 [2024-11-26 20:19:49.263286] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70524 ] 00:06:55.879 [2024-11-26 20:19:49.427171] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:56.139 [2024-11-26 20:19:49.513485] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.139 [2024-11-26 20:19:49.513779] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:56.139 [2024-11-26 20:19:49.513732] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.139 [2024-11-26 20:19:49.513909] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:56.706 20:19:50 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:56.706 20:19:50 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:56.706 20:19:50 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:56.706 20:19:50 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.706 20:19:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:56.706 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:56.706 POWER: Cannot set governor of lcore 0 to userspace 00:06:56.706 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:56.706 POWER: Cannot set governor of lcore 0 to performance 00:06:56.706 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:56.706 POWER: Cannot set governor of lcore 0 to userspace 00:06:56.706 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:56.706 POWER: Cannot set governor of lcore 0 to userspace 00:06:56.706 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:56.706 POWER: Unable to set Power Management Environment for lcore 0 00:06:56.706 [2024-11-26 20:19:50.166548] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:56.706 [2024-11-26 20:19:50.166574] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:56.706 [2024-11-26 20:19:50.166591] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:56.706 [2024-11-26 20:19:50.166666] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:56.706 [2024-11-26 20:19:50.166680] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:56.706 [2024-11-26 20:19:50.166691] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:56.706 20:19:50 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.706 20:19:50 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:56.706 20:19:50 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.706 20:19:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:56.966 [2024-11-26 20:19:50.259297] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:06:56.966 20:19:50 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.966 20:19:50 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:56.966 20:19:50 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:56.966 20:19:50 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:56.966 20:19:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:56.966 ************************************ 00:06:56.966 START TEST scheduler_create_thread 00:06:56.966 ************************************ 00:06:56.966 20:19:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:56.966 20:19:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:56.966 20:19:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.966 20:19:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.966 2 00:06:56.966 20:19:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.966 20:19:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:56.966 20:19:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.966 20:19:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.966 3 00:06:56.966 20:19:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.966 20:19:50 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:56.966 20:19:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.966 20:19:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.966 4 00:06:56.966 20:19:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.966 20:19:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:56.966 20:19:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.966 20:19:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.966 5 00:06:56.966 20:19:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.966 20:19:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:56.966 20:19:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.966 20:19:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.966 6 00:06:56.966 20:19:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.966 20:19:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:56.966 20:19:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.966 20:19:50 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:06:56.966 7 00:06:56.966 20:19:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.966 20:19:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:56.966 20:19:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.966 20:19:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.966 8 00:06:56.966 20:19:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.966 20:19:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:56.966 20:19:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.966 20:19:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.966 9 00:06:56.966 20:19:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.966 20:19:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:56.966 20:19:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.966 20:19:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.966 10 00:06:56.966 20:19:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.966 20:19:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:06:56.966 20:19:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.966 20:19:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.533 20:19:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.533 20:19:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:57.533 20:19:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:57.533 20:19:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.533 20:19:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:58.470 20:19:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.470 20:19:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:58.470 20:19:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.470 20:19:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.407 20:19:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.407 20:19:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:59.407 20:19:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:59.407 20:19:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.407 20:19:52 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.990 ************************************ 00:06:59.990 END TEST scheduler_create_thread 00:06:59.990 ************************************ 00:06:59.990 20:19:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.990 00:06:59.990 real 0m3.216s 00:06:59.990 user 0m0.032s 00:06:59.990 sys 0m0.001s 00:06:59.990 20:19:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.990 20:19:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.275 20:19:53 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:00.275 20:19:53 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70524 00:07:00.275 20:19:53 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 70524 ']' 00:07:00.275 20:19:53 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 70524 00:07:00.275 20:19:53 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:07:00.275 20:19:53 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:00.275 20:19:53 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70524 00:07:00.275 20:19:53 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:00.275 20:19:53 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:00.275 killing process with pid 70524 00:07:00.275 20:19:53 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70524' 00:07:00.275 20:19:53 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 70524 00:07:00.275 20:19:53 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 70524 00:07:00.535 [2024-11-26 20:19:53.866868] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:07:00.794 00:07:00.794 real 0m5.303s 00:07:00.794 user 0m10.458s 00:07:00.794 sys 0m0.545s 00:07:00.794 20:19:54 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.794 20:19:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:00.794 ************************************ 00:07:00.794 END TEST event_scheduler 00:07:00.794 ************************************ 00:07:00.794 20:19:54 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:00.794 20:19:54 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:00.794 20:19:54 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:00.794 20:19:54 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.794 20:19:54 event -- common/autotest_common.sh@10 -- # set +x 00:07:00.794 ************************************ 00:07:00.794 START TEST app_repeat 00:07:00.794 ************************************ 00:07:00.794 20:19:54 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:07:00.794 20:19:54 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.795 20:19:54 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:00.795 20:19:54 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:00.795 20:19:54 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:00.795 20:19:54 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:00.795 20:19:54 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:00.795 20:19:54 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:00.795 20:19:54 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70630 00:07:00.795 20:19:54 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:00.795 
20:19:54 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:00.795 20:19:54 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70630' 00:07:00.795 Process app_repeat pid: 70630 00:07:00.795 spdk_app_start Round 0 00:07:00.795 20:19:54 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:00.795 20:19:54 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:00.795 20:19:54 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70630 /var/tmp/spdk-nbd.sock 00:07:00.795 20:19:54 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70630 ']' 00:07:00.795 20:19:54 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:00.795 20:19:54 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:00.795 20:19:54 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:00.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:00.795 20:19:54 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:00.795 20:19:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:01.053 [2024-11-26 20:19:54.373865] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:01.053 [2024-11-26 20:19:54.374007] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70630 ] 00:07:01.053 [2024-11-26 20:19:54.532342] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:01.311 [2024-11-26 20:19:54.613525] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.311 [2024-11-26 20:19:54.613689] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.879 20:19:55 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:01.879 20:19:55 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:01.879 20:19:55 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:02.138 Malloc0 00:07:02.138 20:19:55 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:02.398 Malloc1 00:07:02.398 20:19:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:02.398 20:19:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.398 20:19:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:02.398 20:19:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:02.398 20:19:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:02.398 20:19:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:02.398 20:19:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:02.398 20:19:55 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.398 20:19:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:02.398 20:19:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:02.398 20:19:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:02.398 20:19:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:02.398 20:19:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:02.398 20:19:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:02.398 20:19:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:02.398 20:19:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:02.657 /dev/nbd0 00:07:02.657 20:19:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:02.657 20:19:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:02.657 20:19:56 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:02.657 20:19:56 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:02.657 20:19:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:02.657 20:19:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:02.657 20:19:56 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:02.657 20:19:56 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:02.657 20:19:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:02.657 20:19:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:02.657 20:19:56 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:02.657 1+0 records in 00:07:02.657 1+0 
records out 00:07:02.657 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000428958 s, 9.5 MB/s 00:07:02.657 20:19:56 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:02.657 20:19:56 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:02.657 20:19:56 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:02.657 20:19:56 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:02.657 20:19:56 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:02.657 20:19:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:02.657 20:19:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:02.657 20:19:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:02.916 /dev/nbd1 00:07:03.177 20:19:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:03.177 20:19:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:03.177 20:19:56 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:03.177 20:19:56 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:03.177 20:19:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:03.177 20:19:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:03.177 20:19:56 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:03.177 20:19:56 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:03.177 20:19:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:03.177 20:19:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:03.177 20:19:56 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:03.177 1+0 records in 00:07:03.177 1+0 records out 00:07:03.177 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000258168 s, 15.9 MB/s 00:07:03.177 20:19:56 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:03.177 20:19:56 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:03.177 20:19:56 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:03.177 20:19:56 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:03.177 20:19:56 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:03.177 20:19:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:03.177 20:19:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:03.177 20:19:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:03.177 20:19:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.177 20:19:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:03.437 20:19:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:03.437 { 00:07:03.437 "nbd_device": "/dev/nbd0", 00:07:03.437 "bdev_name": "Malloc0" 00:07:03.437 }, 00:07:03.437 { 00:07:03.437 "nbd_device": "/dev/nbd1", 00:07:03.437 "bdev_name": "Malloc1" 00:07:03.437 } 00:07:03.437 ]' 00:07:03.437 20:19:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:03.437 { 00:07:03.437 "nbd_device": "/dev/nbd0", 00:07:03.437 "bdev_name": "Malloc0" 00:07:03.437 }, 00:07:03.437 { 00:07:03.437 "nbd_device": "/dev/nbd1", 00:07:03.437 "bdev_name": "Malloc1" 00:07:03.437 } 00:07:03.437 ]' 00:07:03.437 20:19:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:07:03.437 20:19:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:03.437 /dev/nbd1' 00:07:03.437 20:19:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:03.437 /dev/nbd1' 00:07:03.437 20:19:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:03.437 20:19:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:03.437 20:19:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:03.437 20:19:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:03.437 20:19:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:03.437 20:19:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:03.437 20:19:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:03.437 20:19:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:03.437 20:19:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:03.437 20:19:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:03.437 20:19:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:03.437 20:19:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:03.437 256+0 records in 00:07:03.437 256+0 records out 00:07:03.437 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137627 s, 76.2 MB/s 00:07:03.437 20:19:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:03.437 20:19:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:03.437 256+0 records in 00:07:03.437 256+0 records out 00:07:03.437 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0187927 s, 55.8 MB/s 00:07:03.437 20:19:56 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:03.438 20:19:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:03.438 256+0 records in 00:07:03.438 256+0 records out 00:07:03.438 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0235787 s, 44.5 MB/s 00:07:03.438 20:19:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:03.438 20:19:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:03.438 20:19:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:03.438 20:19:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:03.438 20:19:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:03.438 20:19:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:03.438 20:19:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:03.438 20:19:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:03.438 20:19:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:03.438 20:19:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:03.438 20:19:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:03.438 20:19:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:03.438 20:19:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:03.438 20:19:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.438 20:19:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:03.438 20:19:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:03.438 20:19:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:03.438 20:19:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.438 20:19:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:03.698 20:19:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:03.698 20:19:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:03.698 20:19:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:03.698 20:19:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.698 20:19:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.698 20:19:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:03.698 20:19:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:03.698 20:19:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.698 20:19:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.698 20:19:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:03.956 20:19:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:03.956 20:19:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:03.956 20:19:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:03.956 20:19:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.956 20:19:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.956 20:19:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:03.956 20:19:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:07:03.956 20:19:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.956 20:19:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:03.956 20:19:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.956 20:19:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:04.215 20:19:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:04.215 20:19:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:04.215 20:19:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:04.215 20:19:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:04.215 20:19:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:04.215 20:19:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:04.215 20:19:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:04.215 20:19:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:04.215 20:19:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:04.215 20:19:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:04.215 20:19:57 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:04.215 20:19:57 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:04.215 20:19:57 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:04.783 20:19:58 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:04.783 [2024-11-26 20:19:58.281539] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:05.043 [2024-11-26 20:19:58.366000] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.043 [2024-11-26 20:19:58.366003] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.043 
[2024-11-26 20:19:58.421960] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:05.043 [2024-11-26 20:19:58.422037] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:07.585 spdk_app_start Round 1 00:07:07.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:07.585 20:20:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:07.585 20:20:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:07.585 20:20:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70630 /var/tmp/spdk-nbd.sock 00:07:07.585 20:20:01 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70630 ']' 00:07:07.585 20:20:01 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:07.585 20:20:01 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:07.585 20:20:01 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:07.585 20:20:01 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:07.585 20:20:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:07.845 20:20:01 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:07.845 20:20:01 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:07.845 20:20:01 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:08.104 Malloc0 00:07:08.104 20:20:01 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:08.363 Malloc1 00:07:08.363 20:20:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:08.363 20:20:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.363 20:20:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:08.363 20:20:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:08.363 20:20:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:08.363 20:20:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:08.363 20:20:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:08.363 20:20:01 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.363 20:20:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:08.363 20:20:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:08.363 20:20:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:08.363 20:20:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:08.363 20:20:01 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:08.363 20:20:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:08.363 20:20:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:08.363 20:20:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:08.622 /dev/nbd0 00:07:08.622 20:20:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:08.622 20:20:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:08.622 20:20:02 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:08.622 20:20:02 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:08.622 20:20:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:08.622 20:20:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:08.622 20:20:02 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:08.622 20:20:02 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:08.622 20:20:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:08.622 20:20:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:08.622 20:20:02 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:08.623 1+0 records in 00:07:08.623 1+0 records out 00:07:08.623 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000295544 s, 13.9 MB/s 00:07:08.623 20:20:02 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:08.881 20:20:02 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:08.881 20:20:02 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:08.881 
20:20:02 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:08.881 20:20:02 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:08.881 20:20:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:08.881 20:20:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:08.881 20:20:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:08.881 /dev/nbd1 00:07:09.141 20:20:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:09.141 20:20:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:09.141 20:20:02 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:09.141 20:20:02 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:09.141 20:20:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:09.141 20:20:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:09.141 20:20:02 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:09.141 20:20:02 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:09.141 20:20:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:09.141 20:20:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:09.141 20:20:02 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:09.141 1+0 records in 00:07:09.141 1+0 records out 00:07:09.141 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288015 s, 14.2 MB/s 00:07:09.141 20:20:02 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:09.141 20:20:02 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:09.141 20:20:02 event.app_repeat 
-- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:09.141 20:20:02 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:09.141 20:20:02 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:09.141 20:20:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:09.141 20:20:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:09.141 20:20:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:09.141 20:20:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.141 20:20:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:09.460 20:20:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:09.460 { 00:07:09.460 "nbd_device": "/dev/nbd0", 00:07:09.460 "bdev_name": "Malloc0" 00:07:09.460 }, 00:07:09.460 { 00:07:09.460 "nbd_device": "/dev/nbd1", 00:07:09.460 "bdev_name": "Malloc1" 00:07:09.460 } 00:07:09.460 ]' 00:07:09.460 20:20:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:09.460 { 00:07:09.460 "nbd_device": "/dev/nbd0", 00:07:09.460 "bdev_name": "Malloc0" 00:07:09.460 }, 00:07:09.460 { 00:07:09.460 "nbd_device": "/dev/nbd1", 00:07:09.460 "bdev_name": "Malloc1" 00:07:09.460 } 00:07:09.460 ]' 00:07:09.460 20:20:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:09.460 20:20:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:09.460 /dev/nbd1' 00:07:09.460 20:20:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:09.460 /dev/nbd1' 00:07:09.460 20:20:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:09.460 20:20:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:09.460 20:20:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:09.460 
20:20:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:09.460 20:20:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:09.460 20:20:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:09.460 20:20:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:09.460 20:20:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:09.460 20:20:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:09.460 20:20:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:09.460 20:20:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:09.460 20:20:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:09.460 256+0 records in 00:07:09.460 256+0 records out 00:07:09.460 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00655799 s, 160 MB/s 00:07:09.460 20:20:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:09.460 20:20:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:09.460 256+0 records in 00:07:09.460 256+0 records out 00:07:09.460 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.019233 s, 54.5 MB/s 00:07:09.460 20:20:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:09.460 20:20:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:09.460 256+0 records in 00:07:09.460 256+0 records out 00:07:09.460 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0247769 s, 42.3 MB/s 00:07:09.460 20:20:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:07:09.460 20:20:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:09.460 20:20:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:09.460 20:20:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:09.460 20:20:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:09.460 20:20:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:09.460 20:20:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:09.460 20:20:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:09.460 20:20:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:09.460 20:20:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:09.460 20:20:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:09.460 20:20:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:09.460 20:20:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:09.460 20:20:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.460 20:20:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:09.460 20:20:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:09.460 20:20:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:09.460 20:20:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:09.460 20:20:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:09.746 20:20:03 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:09.746 20:20:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:09.746 20:20:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:09.746 20:20:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:09.746 20:20:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:09.746 20:20:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:09.746 20:20:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:09.746 20:20:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:09.746 20:20:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:09.746 20:20:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:10.006 20:20:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:10.006 20:20:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:10.006 20:20:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:10.006 20:20:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.006 20:20:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.006 20:20:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:10.006 20:20:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:10.006 20:20:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.006 20:20:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:10.006 20:20:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.006 20:20:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:10.266 20:20:03 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:10.266 20:20:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:10.266 20:20:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:10.266 20:20:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:10.266 20:20:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:10.266 20:20:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:10.266 20:20:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:10.266 20:20:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:10.266 20:20:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:10.266 20:20:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:10.266 20:20:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:10.266 20:20:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:10.266 20:20:03 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:10.524 20:20:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:10.784 [2024-11-26 20:20:04.207260] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:10.784 [2024-11-26 20:20:04.288892] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.784 [2024-11-26 20:20:04.288924] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.043 [2024-11-26 20:20:04.342049] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:11.043 [2024-11-26 20:20:04.342208] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:13.582 spdk_app_start Round 2 00:07:13.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
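The round just logged runs `nbd_rpc_data_verify` end to end: create two malloc bdevs, export them as nbd devices, write 1 MiB of random data through each, compare it back, detach, and confirm `nbd_get_disks` reports nothing. The following is a condensed sketch of that sequence, not the actual `nbd_common.sh` helpers; the rpc.py path, socket, and RPC arguments are taken verbatim from the log, while the `DRY_RUN` wrapper is a hypothetical addition so the sketch can run without a live SPDK target.

```shell
#!/usr/bin/env bash
set -eu

# Paths copied from the log above; point SPDK_RPC at your checkout for real use.
SPDK_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
SOCK="/var/tmp/spdk-nbd.sock"
DRY_RUN=${DRY_RUN:-1}   # hypothetical switch: 1 = print RPCs instead of issuing them

rpc() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "rpc.py -s $SOCK $*"
    else
        "$SPDK_RPC" -s "$SOCK" "$@"
    fi
}

run_flow() {
    local bdevs=(Malloc0 Malloc1)
    local nbds=(/dev/nbd0 /dev/nbd1)
    local tmp i n
    tmp=$(mktemp)

    # 1. Create two 64 MiB malloc bdevs with a 4096-byte block size
    #    (each call returns the new bdev name: Malloc0, then Malloc1).
    rpc bdev_malloc_create 64 4096
    rpc bdev_malloc_create 64 4096

    # 2. Export each bdev as a kernel nbd block device.
    for i in 0 1; do
        rpc nbd_start_disk "${bdevs[$i]}" "${nbds[$i]}"
    done

    # 3. Write 1 MiB of random data through each device, then read it
    #    back and compare (the log does this with dd + cmp -b -n 1M).
    dd if=/dev/urandom of="$tmp" bs=4096 count=256 status=none
    for n in "${nbds[@]}"; do
        if [ "$DRY_RUN" = 1 ]; then
            echo "dd if=$tmp of=$n bs=4096 count=256 oflag=direct"
            echo "cmp -b -n 1M $tmp $n"
        else
            dd if="$tmp" of="$n" bs=4096 count=256 oflag=direct status=none
            cmp -b -n 1M "$tmp" "$n"
        fi
    done
    rm -f "$tmp"

    # 4. Detach both devices; nbd_get_disks should then return an empty list.
    for n in "${nbds[@]}"; do
        rpc nbd_stop_disk "$n"
    done
    rpc nbd_get_disks
}

LOG=$(run_flow)
printf '%s\n' "$LOG"
```

In dry-run mode this only prints the command sequence; against a live target the app would afterwards be torn down with `spdk_kill_instance SIGTERM`, exactly as the log shows before each `sleep 3` and the next round.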
00:07:13.582 20:20:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:13.582 20:20:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:13.582 20:20:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70630 /var/tmp/spdk-nbd.sock 00:07:13.582 20:20:06 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70630 ']' 00:07:13.582 20:20:06 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:13.582 20:20:06 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:13.582 20:20:06 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:13.582 20:20:06 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:13.582 20:20:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:13.849 20:20:07 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:13.849 20:20:07 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:13.849 20:20:07 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:14.117 Malloc0 00:07:14.117 20:20:07 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:14.377 Malloc1 00:07:14.377 20:20:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:14.377 20:20:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.377 20:20:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:14.377 20:20:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:14.377 20:20:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:14.377 20:20:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:14.377 20:20:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:14.377 20:20:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.377 20:20:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:14.377 20:20:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:14.377 20:20:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:14.377 20:20:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:14.377 20:20:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:14.377 20:20:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:14.377 20:20:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:14.377 20:20:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:14.638 /dev/nbd0 00:07:14.638 20:20:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:14.638 20:20:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:14.638 20:20:08 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:14.638 20:20:08 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:14.638 20:20:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:14.638 20:20:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:14.638 20:20:08 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:14.638 20:20:08 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:14.638 20:20:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 
00:07:14.638 20:20:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:14.638 20:20:08 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:14.638 1+0 records in 00:07:14.638 1+0 records out 00:07:14.638 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000348769 s, 11.7 MB/s 00:07:14.638 20:20:08 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:14.638 20:20:08 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:14.638 20:20:08 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:14.638 20:20:08 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:14.638 20:20:08 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:14.638 20:20:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:14.638 20:20:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:14.638 20:20:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:14.897 /dev/nbd1 00:07:14.897 20:20:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:14.897 20:20:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:14.897 20:20:08 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:14.897 20:20:08 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:14.897 20:20:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:14.897 20:20:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:14.897 20:20:08 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:14.897 20:20:08 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:07:14.897 20:20:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:14.897 20:20:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:14.897 20:20:08 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:14.897 1+0 records in 00:07:14.897 1+0 records out 00:07:14.897 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224799 s, 18.2 MB/s 00:07:14.897 20:20:08 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:14.897 20:20:08 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:14.897 20:20:08 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:14.897 20:20:08 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:14.897 20:20:08 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:14.897 20:20:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:14.897 20:20:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:14.897 20:20:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:14.897 20:20:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.897 20:20:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:15.157 20:20:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:15.157 { 00:07:15.157 "nbd_device": "/dev/nbd0", 00:07:15.157 "bdev_name": "Malloc0" 00:07:15.157 }, 00:07:15.157 { 00:07:15.157 "nbd_device": "/dev/nbd1", 00:07:15.157 "bdev_name": "Malloc1" 00:07:15.157 } 00:07:15.157 ]' 00:07:15.157 20:20:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:15.157 { 
00:07:15.157 "nbd_device": "/dev/nbd0", 00:07:15.157 "bdev_name": "Malloc0" 00:07:15.157 }, 00:07:15.157 { 00:07:15.157 "nbd_device": "/dev/nbd1", 00:07:15.157 "bdev_name": "Malloc1" 00:07:15.157 } 00:07:15.157 ]' 00:07:15.157 20:20:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:15.157 20:20:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:15.157 /dev/nbd1' 00:07:15.157 20:20:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:15.157 /dev/nbd1' 00:07:15.157 20:20:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:15.157 20:20:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:15.157 20:20:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:15.157 20:20:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:15.157 20:20:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:15.157 20:20:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:15.157 20:20:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:15.157 20:20:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:15.157 20:20:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:15.157 20:20:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:15.157 20:20:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:15.157 20:20:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:15.157 256+0 records in 00:07:15.157 256+0 records out 00:07:15.157 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00729276 s, 144 MB/s 00:07:15.157 20:20:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:15.157 20:20:08 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:15.157 256+0 records in 00:07:15.157 256+0 records out 00:07:15.157 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0229968 s, 45.6 MB/s 00:07:15.157 20:20:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:15.157 20:20:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:15.157 256+0 records in 00:07:15.157 256+0 records out 00:07:15.157 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.022942 s, 45.7 MB/s 00:07:15.157 20:20:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:15.157 20:20:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:15.157 20:20:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:15.157 20:20:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:15.157 20:20:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:15.157 20:20:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:15.157 20:20:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:15.157 20:20:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:15.157 20:20:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:15.157 20:20:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:15.157 20:20:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:15.157 20:20:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
00:07:15.157 20:20:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:15.157 20:20:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.157 20:20:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:15.157 20:20:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:15.157 20:20:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:15.157 20:20:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:15.157 20:20:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:15.416 20:20:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:15.416 20:20:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:15.416 20:20:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:15.416 20:20:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:15.416 20:20:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:15.416 20:20:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:15.416 20:20:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:15.416 20:20:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:15.416 20:20:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:15.416 20:20:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:15.675 20:20:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:15.675 20:20:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:15.675 20:20:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:15.675 20:20:09 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:15.675 20:20:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:15.675 20:20:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:15.675 20:20:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:15.675 20:20:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:15.675 20:20:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:15.675 20:20:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.675 20:20:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:15.934 20:20:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:15.934 20:20:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:15.934 20:20:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:15.934 20:20:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:15.934 20:20:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:15.934 20:20:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:15.934 20:20:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:15.934 20:20:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:15.934 20:20:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:15.934 20:20:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:15.934 20:20:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:15.934 20:20:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:15.934 20:20:09 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:16.502 20:20:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:16.502 
[2024-11-26 20:20:09.977349] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:16.762 [2024-11-26 20:20:10.059482] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.762 [2024-11-26 20:20:10.059489] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.762 [2024-11-26 20:20:10.115023] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:16.762 [2024-11-26 20:20:10.115096] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:19.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:19.303 20:20:12 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70630 /var/tmp/spdk-nbd.sock 00:07:19.303 20:20:12 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70630 ']' 00:07:19.303 20:20:12 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:19.303 20:20:12 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:19.303 20:20:12 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:19.303 20:20:12 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:19.303 20:20:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:19.563 20:20:13 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:19.563 20:20:13 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:19.563 20:20:13 event.app_repeat -- event/event.sh@39 -- # killprocess 70630 00:07:19.563 20:20:13 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 70630 ']' 00:07:19.563 20:20:13 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 70630 00:07:19.563 20:20:13 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:07:19.563 20:20:13 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:19.563 20:20:13 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70630 00:07:19.563 20:20:13 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:19.563 20:20:13 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:19.563 20:20:13 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70630' 00:07:19.563 killing process with pid 70630 00:07:19.563 20:20:13 event.app_repeat -- common/autotest_common.sh@969 -- # kill 70630 00:07:19.563 20:20:13 event.app_repeat -- common/autotest_common.sh@974 -- # wait 70630 00:07:19.823 spdk_app_start is called in Round 0. 00:07:19.823 Shutdown signal received, stop current app iteration 00:07:19.823 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:07:19.823 spdk_app_start is called in Round 1. 00:07:19.823 Shutdown signal received, stop current app iteration 00:07:19.823 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:07:19.823 spdk_app_start is called in Round 2. 
00:07:19.823 Shutdown signal received, stop current app iteration 00:07:19.823 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:07:19.823 spdk_app_start is called in Round 3. 00:07:19.823 Shutdown signal received, stop current app iteration 00:07:19.823 20:20:13 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:19.823 20:20:13 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:19.823 00:07:19.823 real 0m19.043s 00:07:19.823 user 0m42.003s 00:07:19.823 sys 0m3.231s 00:07:19.823 20:20:13 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:19.823 20:20:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:19.823 ************************************ 00:07:19.823 END TEST app_repeat 00:07:19.823 ************************************ 00:07:20.084 20:20:13 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:20.084 20:20:13 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:20.084 20:20:13 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:20.084 20:20:13 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.084 20:20:13 event -- common/autotest_common.sh@10 -- # set +x 00:07:20.084 ************************************ 00:07:20.084 START TEST cpu_locks 00:07:20.084 ************************************ 00:07:20.084 20:20:13 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:20.084 * Looking for test storage... 
00:07:20.084 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:20.084 20:20:13 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:20.084 20:20:13 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:07:20.084 20:20:13 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:20.345 20:20:13 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:20.345 20:20:13 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:20.345 20:20:13 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:20.345 20:20:13 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:20.345 20:20:13 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:20.345 20:20:13 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:20.345 20:20:13 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:20.345 20:20:13 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:20.345 20:20:13 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:20.345 20:20:13 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:20.345 20:20:13 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:20.345 20:20:13 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:20.345 20:20:13 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:20.345 20:20:13 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:20.345 20:20:13 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:20.345 20:20:13 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:20.345 20:20:13 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:20.345 20:20:13 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:20.345 20:20:13 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:20.345 20:20:13 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:20.345 20:20:13 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:20.345 20:20:13 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:20.345 20:20:13 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:20.345 20:20:13 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:20.345 20:20:13 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:20.345 20:20:13 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:20.345 20:20:13 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:20.345 20:20:13 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:20.345 20:20:13 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:20.345 20:20:13 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:20.345 20:20:13 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:20.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.345 --rc genhtml_branch_coverage=1 00:07:20.345 --rc genhtml_function_coverage=1 00:07:20.345 --rc genhtml_legend=1 00:07:20.345 --rc geninfo_all_blocks=1 00:07:20.345 --rc geninfo_unexecuted_blocks=1 00:07:20.345 00:07:20.346 ' 00:07:20.346 20:20:13 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:20.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.346 --rc genhtml_branch_coverage=1 00:07:20.346 --rc genhtml_function_coverage=1 00:07:20.346 --rc genhtml_legend=1 00:07:20.346 --rc geninfo_all_blocks=1 00:07:20.346 --rc geninfo_unexecuted_blocks=1 
00:07:20.346 00:07:20.346 ' 00:07:20.346 20:20:13 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:20.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.346 --rc genhtml_branch_coverage=1 00:07:20.346 --rc genhtml_function_coverage=1 00:07:20.346 --rc genhtml_legend=1 00:07:20.346 --rc geninfo_all_blocks=1 00:07:20.346 --rc geninfo_unexecuted_blocks=1 00:07:20.346 00:07:20.346 ' 00:07:20.346 20:20:13 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:20.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.346 --rc genhtml_branch_coverage=1 00:07:20.346 --rc genhtml_function_coverage=1 00:07:20.346 --rc genhtml_legend=1 00:07:20.346 --rc geninfo_all_blocks=1 00:07:20.346 --rc geninfo_unexecuted_blocks=1 00:07:20.346 00:07:20.346 ' 00:07:20.346 20:20:13 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:20.346 20:20:13 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:20.346 20:20:13 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:20.346 20:20:13 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:20.346 20:20:13 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:20.346 20:20:13 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.346 20:20:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:20.346 ************************************ 00:07:20.346 START TEST default_locks 00:07:20.346 ************************************ 00:07:20.346 20:20:13 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:07:20.346 20:20:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=71076 00:07:20.346 20:20:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 71076 00:07:20.346 20:20:13 event.cpu_locks.default_locks -- 
event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:20.346 20:20:13 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 71076 ']' 00:07:20.346 20:20:13 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.346 20:20:13 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:20.346 20:20:13 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.346 20:20:13 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:20.346 20:20:13 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:20.346 [2024-11-26 20:20:13.784939] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:20.346 [2024-11-26 20:20:13.785201] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71076 ] 00:07:20.606 [2024-11-26 20:20:13.935123] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.606 [2024-11-26 20:20:14.019421] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.176 20:20:14 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:21.176 20:20:14 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:07:21.176 20:20:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 71076 00:07:21.176 20:20:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 71076 00:07:21.176 20:20:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:21.744 20:20:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 71076 00:07:21.744 20:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 71076 ']' 00:07:21.744 20:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 71076 00:07:21.744 20:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:07:21.744 20:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:21.744 20:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71076 00:07:21.744 20:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:21.744 20:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:21.744 20:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 71076' 00:07:21.744 killing process with pid 71076 00:07:21.744 20:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 71076 00:07:21.744 20:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 71076 00:07:22.313 20:20:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 71076 00:07:22.313 20:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:07:22.313 20:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71076 00:07:22.313 20:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:22.313 20:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:22.313 20:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:22.313 20:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:22.313 20:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 71076 00:07:22.313 20:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 71076 ']' 00:07:22.313 20:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.313 20:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:22.313 20:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:22.313 20:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:22.313 20:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:22.313 ERROR: process (pid: 71076) is no longer running 00:07:22.313 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71076) - No such process 00:07:22.313 20:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:22.313 20:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:07:22.313 20:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:07:22.313 20:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:22.313 20:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:22.313 20:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:22.313 20:20:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:22.313 20:20:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:22.313 20:20:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:22.313 20:20:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:22.313 00:07:22.313 real 0m2.031s 00:07:22.313 user 0m1.948s 00:07:22.313 sys 0m0.720s 00:07:22.313 ************************************ 00:07:22.313 END TEST default_locks 00:07:22.313 ************************************ 00:07:22.313 20:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:22.313 20:20:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:22.313 20:20:15 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:22.313 20:20:15 event.cpu_locks -- common/autotest_common.sh@1101 -- # 
'[' 2 -le 1 ']' 00:07:22.313 20:20:15 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:22.313 20:20:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:22.313 ************************************ 00:07:22.313 START TEST default_locks_via_rpc 00:07:22.313 ************************************ 00:07:22.313 20:20:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:07:22.313 20:20:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=71131 00:07:22.313 20:20:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:22.313 20:20:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 71131 00:07:22.313 20:20:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71131 ']' 00:07:22.313 20:20:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.313 20:20:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:22.313 20:20:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.313 20:20:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:22.313 20:20:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.572 [2024-11-26 20:20:15.890516] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:22.572 [2024-11-26 20:20:15.890840] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71131 ] 00:07:22.572 [2024-11-26 20:20:16.056343] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.832 [2024-11-26 20:20:16.140909] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.437 20:20:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:23.437 20:20:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:23.437 20:20:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:23.437 20:20:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.437 20:20:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.437 20:20:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.437 20:20:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:23.437 20:20:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:23.437 20:20:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:23.437 20:20:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:23.437 20:20:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:23.437 20:20:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.437 20:20:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.437 20:20:16 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.437 20:20:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 71131 00:07:23.437 20:20:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 71131 00:07:23.437 20:20:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:23.697 20:20:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 71131 00:07:23.697 20:20:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 71131 ']' 00:07:23.697 20:20:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 71131 00:07:23.697 20:20:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:07:23.698 20:20:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:23.698 20:20:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71131 00:07:23.958 killing process with pid 71131 00:07:23.958 20:20:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:23.958 20:20:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:23.958 20:20:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71131' 00:07:23.958 20:20:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 71131 00:07:23.958 20:20:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 71131 00:07:24.528 00:07:24.528 real 0m2.043s 00:07:24.528 user 0m1.977s 00:07:24.528 sys 0m0.735s 00:07:24.528 20:20:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:24.528 20:20:17 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.528 ************************************ 00:07:24.528 END TEST default_locks_via_rpc 00:07:24.528 ************************************ 00:07:24.528 20:20:17 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:24.528 20:20:17 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:24.528 20:20:17 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:24.528 20:20:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:24.528 ************************************ 00:07:24.528 START TEST non_locking_app_on_locked_coremask 00:07:24.528 ************************************ 00:07:24.528 20:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:07:24.528 20:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=71183 00:07:24.528 20:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:24.528 20:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 71183 /var/tmp/spdk.sock 00:07:24.528 20:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71183 ']' 00:07:24.528 20:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.528 20:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:24.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:24.528 20:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.528 20:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:24.528 20:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:24.528 [2024-11-26 20:20:17.988858] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:24.528 [2024-11-26 20:20:17.989324] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71183 ] 00:07:24.788 [2024-11-26 20:20:18.139580] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.788 [2024-11-26 20:20:18.243585] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.357 20:20:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:25.357 20:20:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:25.357 20:20:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:25.357 20:20:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=71194 00:07:25.357 20:20:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 71194 /var/tmp/spdk2.sock 00:07:25.357 20:20:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71194 ']' 00:07:25.357 20:20:18 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:25.357 20:20:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:25.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:25.357 20:20:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:25.357 20:20:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:25.357 20:20:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:25.357 [2024-11-26 20:20:18.897048] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:25.357 [2024-11-26 20:20:18.897185] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71194 ] 00:07:25.632 [2024-11-26 20:20:19.053854] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:25.632 [2024-11-26 20:20:19.053936] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.897 [2024-11-26 20:20:19.212686] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.465 20:20:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:26.465 20:20:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:26.465 20:20:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 71183 00:07:26.465 20:20:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71183 00:07:26.465 20:20:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:27.035 20:20:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 71183 00:07:27.035 20:20:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71183 ']' 00:07:27.035 20:20:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71183 00:07:27.035 20:20:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:27.035 20:20:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:27.035 20:20:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71183 00:07:27.035 killing process with pid 71183 00:07:27.035 20:20:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:27.035 20:20:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:27.035 20:20:20 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 71183' 00:07:27.035 20:20:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71183 00:07:27.035 20:20:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71183 00:07:27.974 20:20:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 71194 00:07:27.974 20:20:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71194 ']' 00:07:27.974 20:20:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71194 00:07:27.974 20:20:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:27.974 20:20:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:27.974 20:20:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71194 00:07:28.234 20:20:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:28.234 20:20:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:28.234 killing process with pid 71194 00:07:28.234 20:20:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71194' 00:07:28.234 20:20:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71194 00:07:28.234 20:20:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71194 00:07:28.854 ************************************ 00:07:28.854 END TEST non_locking_app_on_locked_coremask 00:07:28.854 ************************************ 00:07:28.854 00:07:28.854 real 0m4.185s 00:07:28.854 
user 0m4.195s 00:07:28.854 sys 0m1.401s 00:07:28.854 20:20:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:28.854 20:20:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:28.854 20:20:22 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:28.854 20:20:22 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:28.854 20:20:22 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:28.854 20:20:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:28.854 ************************************ 00:07:28.854 START TEST locking_app_on_unlocked_coremask 00:07:28.854 ************************************ 00:07:28.854 20:20:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:07:28.854 20:20:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=71274 00:07:28.854 20:20:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:28.854 20:20:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 71274 /var/tmp/spdk.sock 00:07:28.854 20:20:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71274 ']' 00:07:28.854 20:20:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.854 20:20:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:28.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:28.854 20:20:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.854 20:20:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:28.854 20:20:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:28.854 [2024-11-26 20:20:22.240760] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:28.854 [2024-11-26 20:20:22.240936] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71274 ] 00:07:28.854 [2024-11-26 20:20:22.398530] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:28.854 [2024-11-26 20:20:22.398628] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.114 [2024-11-26 20:20:22.453365] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.681 20:20:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:29.681 20:20:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:29.681 20:20:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=71289 00:07:29.682 20:20:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:29.682 20:20:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 71289 /var/tmp/spdk2.sock 00:07:29.682 20:20:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71289 
']' 00:07:29.682 20:20:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:29.682 20:20:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:29.682 20:20:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:29.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:29.682 20:20:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:29.682 20:20:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:29.682 [2024-11-26 20:20:23.207763] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:29.682 [2024-11-26 20:20:23.207912] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71289 ] 00:07:29.941 [2024-11-26 20:20:23.363591] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.199 [2024-11-26 20:20:23.528869] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.765 20:20:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:30.765 20:20:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:30.765 20:20:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 71289 00:07:30.765 20:20:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71289 00:07:30.765 20:20:24 
event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:31.024 20:20:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 71274 00:07:31.024 20:20:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71274 ']' 00:07:31.024 20:20:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71274 00:07:31.024 20:20:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:31.024 20:20:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:31.024 20:20:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71274 00:07:31.282 killing process with pid 71274 00:07:31.282 20:20:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:31.282 20:20:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:31.282 20:20:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71274' 00:07:31.282 20:20:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71274 00:07:31.282 20:20:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 71274 00:07:32.216 20:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 71289 00:07:32.216 20:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71289 ']' 00:07:32.216 20:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71289 00:07:32.216 20:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@955 -- # uname 00:07:32.216 20:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:32.216 20:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71289 00:07:32.216 20:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:32.216 20:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:32.216 killing process with pid 71289 00:07:32.216 20:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71289' 00:07:32.216 20:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71289 00:07:32.216 20:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 71289 00:07:32.783 00:07:32.783 real 0m4.173s 00:07:32.783 user 0m4.222s 00:07:32.783 sys 0m1.292s 00:07:32.783 20:20:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:32.783 20:20:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:32.783 ************************************ 00:07:32.783 END TEST locking_app_on_unlocked_coremask 00:07:32.783 ************************************ 00:07:33.043 20:20:26 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:33.043 20:20:26 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:33.043 20:20:26 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:33.043 20:20:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:33.043 ************************************ 00:07:33.043 START TEST 
locking_app_on_locked_coremask 00:07:33.043 ************************************ 00:07:33.043 20:20:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:07:33.043 20:20:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=71359 00:07:33.043 20:20:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:33.043 20:20:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 71359 /var/tmp/spdk.sock 00:07:33.043 20:20:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71359 ']' 00:07:33.043 20:20:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.043 20:20:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:33.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:33.043 20:20:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.043 20:20:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:33.043 20:20:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:33.043 [2024-11-26 20:20:26.458387] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:33.043 [2024-11-26 20:20:26.458522] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71359 ] 00:07:33.302 [2024-11-26 20:20:26.617282] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.302 [2024-11-26 20:20:26.697423] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.871 20:20:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:33.871 20:20:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:33.871 20:20:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=71374 00:07:33.871 20:20:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:33.871 20:20:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 71374 /var/tmp/spdk2.sock 00:07:33.871 20:20:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:33.871 20:20:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71374 /var/tmp/spdk2.sock 00:07:33.871 20:20:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:33.871 20:20:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:33.871 20:20:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:33.871 20:20:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:07:33.871 20:20:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71374 /var/tmp/spdk2.sock 00:07:33.871 20:20:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71374 ']' 00:07:33.871 20:20:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:33.871 20:20:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:33.871 20:20:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:33.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:33.871 20:20:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:33.871 20:20:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:33.871 [2024-11-26 20:20:27.415885] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:33.871 [2024-11-26 20:20:27.416016] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71374 ] 00:07:34.131 [2024-11-26 20:20:27.566252] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 71359 has claimed it. 00:07:34.131 [2024-11-26 20:20:27.566328] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:07:34.706 ERROR: process (pid: 71374) is no longer running 00:07:34.706 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71374) - No such process 00:07:34.706 20:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:34.706 20:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:34.706 20:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:34.706 20:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:34.706 20:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:34.706 20:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:34.706 20:20:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 71359 00:07:34.706 20:20:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71359 00:07:34.706 20:20:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:34.965 20:20:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 71359 00:07:34.965 20:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71359 ']' 00:07:34.965 20:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71359 00:07:34.965 20:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:34.965 20:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:34.965 20:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71359 00:07:34.965 
killing process with pid 71359 00:07:34.965 20:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:34.965 20:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:34.965 20:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71359' 00:07:34.965 20:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71359 00:07:34.965 20:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71359 00:07:35.531 00:07:35.531 real 0m2.719s 00:07:35.531 user 0m2.843s 00:07:35.531 sys 0m0.866s 00:07:35.531 20:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:35.531 20:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:35.531 ************************************ 00:07:35.531 END TEST locking_app_on_locked_coremask 00:07:35.531 ************************************ 00:07:35.790 20:20:29 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:35.790 20:20:29 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:35.790 20:20:29 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.790 20:20:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:35.790 ************************************ 00:07:35.790 START TEST locking_overlapped_coremask 00:07:35.790 ************************************ 00:07:35.790 20:20:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:07:35.790 20:20:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=71417 00:07:35.790 20:20:29 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:35.790 20:20:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 71417 /var/tmp/spdk.sock 00:07:35.790 20:20:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71417 ']' 00:07:35.790 20:20:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.790 20:20:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:35.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.790 20:20:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.790 20:20:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:35.790 20:20:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:35.790 [2024-11-26 20:20:29.242834] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:35.790 [2024-11-26 20:20:29.242967] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71417 ] 00:07:36.048 [2024-11-26 20:20:29.407209] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:36.048 [2024-11-26 20:20:29.487913] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.048 [2024-11-26 20:20:29.487949] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.048 [2024-11-26 20:20:29.488073] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:36.618 20:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:36.618 20:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:36.618 20:20:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=71435 00:07:36.618 20:20:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:36.618 20:20:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 71435 /var/tmp/spdk2.sock 00:07:36.618 20:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:36.618 20:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71435 /var/tmp/spdk2.sock 00:07:36.618 20:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:36.618 20:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:36.618 20:20:30 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:36.618 20:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:36.618 20:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71435 /var/tmp/spdk2.sock 00:07:36.618 20:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71435 ']' 00:07:36.618 20:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:36.618 20:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:36.618 20:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:36.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:36.618 20:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:36.618 20:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:36.878 [2024-11-26 20:20:30.211253] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:36.878 [2024-11-26 20:20:30.211390] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71435 ] 00:07:36.878 [2024-11-26 20:20:30.373343] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71417 has claimed it. 00:07:36.878 [2024-11-26 20:20:30.373424] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:07:37.448 ERROR: process (pid: 71435) is no longer running 00:07:37.448 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71435) - No such process 00:07:37.448 20:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:37.448 20:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:37.448 20:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:37.448 20:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:37.448 20:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:37.448 20:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:37.448 20:20:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:37.448 20:20:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:37.448 20:20:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:37.448 20:20:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:37.448 20:20:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 71417 00:07:37.448 20:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 71417 ']' 00:07:37.448 20:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 71417 00:07:37.448 20:20:30 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:07:37.448 20:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:37.448 20:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71417 00:07:37.448 20:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:37.449 20:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:37.449 killing process with pid 71417 00:07:37.449 20:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71417' 00:07:37.449 20:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 71417 00:07:37.449 20:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 71417 00:07:38.018 00:07:38.018 real 0m2.330s 00:07:38.018 user 0m6.037s 00:07:38.018 sys 0m0.657s 00:07:38.018 20:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:38.018 20:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:38.018 ************************************ 00:07:38.018 END TEST locking_overlapped_coremask 00:07:38.018 ************************************ 00:07:38.018 20:20:31 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:38.018 20:20:31 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:38.018 20:20:31 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:38.018 20:20:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:38.018 ************************************ 00:07:38.018 START TEST 
locking_overlapped_coremask_via_rpc 00:07:38.018 ************************************ 00:07:38.018 20:20:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:07:38.018 20:20:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=71487 00:07:38.018 20:20:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 71487 /var/tmp/spdk.sock 00:07:38.018 20:20:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:38.018 20:20:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71487 ']' 00:07:38.018 20:20:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.018 20:20:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:38.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.018 20:20:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.018 20:20:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:38.018 20:20:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.278 [2024-11-26 20:20:31.635243] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:38.278 [2024-11-26 20:20:31.635373] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71487 ] 00:07:38.278 [2024-11-26 20:20:31.799037] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:38.278 [2024-11-26 20:20:31.799110] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:38.538 [2024-11-26 20:20:31.877405] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:38.538 [2024-11-26 20:20:31.877500] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.538 [2024-11-26 20:20:31.877683] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:39.108 20:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:39.108 20:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:39.108 20:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=71501 00:07:39.108 20:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 71501 /var/tmp/spdk2.sock 00:07:39.108 20:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71501 ']' 00:07:39.108 20:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:39.108 20:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:39.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:39.108 20:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:39.108 20:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:39.108 20:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:39.108 20:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.108 [2024-11-26 20:20:32.573995] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:39.108 [2024-11-26 20:20:32.574132] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71501 ] 00:07:39.488 [2024-11-26 20:20:32.734735] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:39.489 [2024-11-26 20:20:32.734794] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:39.489 [2024-11-26 20:20:32.904291] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:39.489 [2024-11-26 20:20:32.904329] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:07:39.489 [2024-11-26 20:20:32.904332] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:40.072 20:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:40.072 20:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:40.072 20:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:40.072 20:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.072 20:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.072 20:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.072 20:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:40.072 20:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:40.072 20:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:40.072 20:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:40.072 20:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.072 20:20:33 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:40.072 20:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.072 20:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:40.072 20:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.072 20:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.072 [2024-11-26 20:20:33.500893] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71487 has claimed it. 00:07:40.072 request: 00:07:40.072 { 00:07:40.072 "method": "framework_enable_cpumask_locks", 00:07:40.072 "req_id": 1 00:07:40.072 } 00:07:40.072 Got JSON-RPC error response 00:07:40.072 response: 00:07:40.072 { 00:07:40.072 "code": -32603, 00:07:40.072 "message": "Failed to claim CPU core: 2" 00:07:40.072 } 00:07:40.072 20:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:40.072 20:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:40.072 20:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:40.072 20:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:40.072 20:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:40.072 20:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 71487 /var/tmp/spdk.sock 00:07:40.072 20:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # 
'[' -z 71487 ']' 00:07:40.072 20:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.072 20:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:40.072 20:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.073 20:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:40.073 20:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.332 20:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:40.332 20:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:40.332 20:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 71501 /var/tmp/spdk2.sock 00:07:40.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:40.332 20:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71501 ']' 00:07:40.332 20:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:40.332 20:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:40.332 20:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:07:40.332 20:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:40.332 20:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.591 20:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:40.591 20:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:40.591 20:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:40.591 20:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:40.591 20:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:40.591 20:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:40.591 00:07:40.591 real 0m2.439s 00:07:40.591 user 0m1.210s 00:07:40.591 sys 0m0.163s 00:07:40.591 20:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:40.591 20:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.591 ************************************ 00:07:40.591 END TEST locking_overlapped_coremask_via_rpc 00:07:40.591 ************************************ 00:07:40.591 20:20:34 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:40.591 20:20:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71487 ]] 00:07:40.591 20:20:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 71487 00:07:40.591 20:20:34 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71487 ']' 00:07:40.591 20:20:34 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71487 00:07:40.591 20:20:34 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:40.591 20:20:34 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:40.591 20:20:34 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71487 00:07:40.591 killing process with pid 71487 00:07:40.591 20:20:34 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:40.591 20:20:34 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:40.591 20:20:34 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71487' 00:07:40.591 20:20:34 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71487 00:07:40.591 20:20:34 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71487 00:07:41.159 20:20:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71501 ]] 00:07:41.159 20:20:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71501 00:07:41.159 20:20:34 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71501 ']' 00:07:41.159 20:20:34 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71501 00:07:41.159 20:20:34 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:41.159 20:20:34 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:41.159 20:20:34 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71501 00:07:41.159 killing process with pid 71501 00:07:41.159 20:20:34 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:41.159 20:20:34 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:41.159 20:20:34 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 71501' 00:07:41.159 20:20:34 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71501 00:07:41.159 20:20:34 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71501 00:07:41.727 20:20:35 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:41.727 20:20:35 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:41.727 Process with pid 71487 is not found 00:07:41.727 20:20:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71487 ]] 00:07:41.727 20:20:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71487 00:07:41.727 20:20:35 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71487 ']' 00:07:41.727 20:20:35 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71487 00:07:41.727 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71487) - No such process 00:07:41.727 20:20:35 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71487 is not found' 00:07:41.727 20:20:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71501 ]] 00:07:41.727 20:20:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71501 00:07:41.727 20:20:35 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71501 ']' 00:07:41.727 20:20:35 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71501 00:07:41.727 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71501) - No such process 00:07:41.727 20:20:35 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71501 is not found' 00:07:41.727 Process with pid 71501 is not found 00:07:41.727 20:20:35 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:41.727 00:07:41.727 real 0m21.841s 00:07:41.727 user 0m35.318s 00:07:41.727 sys 0m7.149s 00:07:41.727 20:20:35 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.727 20:20:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:41.727 
************************************ 00:07:41.727 END TEST cpu_locks 00:07:41.727 ************************************ 00:07:41.987 00:07:41.987 real 0m51.107s 00:07:41.987 user 1m34.620s 00:07:41.987 sys 0m11.678s 00:07:41.987 20:20:35 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.987 20:20:35 event -- common/autotest_common.sh@10 -- # set +x 00:07:41.987 ************************************ 00:07:41.987 END TEST event 00:07:41.987 ************************************ 00:07:41.987 20:20:35 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:41.987 20:20:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:41.987 20:20:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:41.987 20:20:35 -- common/autotest_common.sh@10 -- # set +x 00:07:41.987 ************************************ 00:07:41.987 START TEST thread 00:07:41.987 ************************************ 00:07:41.987 20:20:35 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:41.987 * Looking for test storage... 
00:07:41.987 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:41.987 20:20:35 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:41.987 20:20:35 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:07:41.987 20:20:35 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:42.246 20:20:35 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:42.246 20:20:35 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:42.246 20:20:35 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:42.246 20:20:35 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:42.246 20:20:35 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:42.246 20:20:35 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:42.246 20:20:35 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:42.246 20:20:35 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:42.246 20:20:35 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:42.246 20:20:35 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:42.246 20:20:35 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:42.246 20:20:35 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:42.246 20:20:35 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:42.246 20:20:35 thread -- scripts/common.sh@345 -- # : 1 00:07:42.246 20:20:35 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:42.247 20:20:35 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:42.247 20:20:35 thread -- scripts/common.sh@365 -- # decimal 1 00:07:42.247 20:20:35 thread -- scripts/common.sh@353 -- # local d=1 00:07:42.247 20:20:35 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:42.247 20:20:35 thread -- scripts/common.sh@355 -- # echo 1 00:07:42.247 20:20:35 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:42.247 20:20:35 thread -- scripts/common.sh@366 -- # decimal 2 00:07:42.247 20:20:35 thread -- scripts/common.sh@353 -- # local d=2 00:07:42.247 20:20:35 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:42.247 20:20:35 thread -- scripts/common.sh@355 -- # echo 2 00:07:42.247 20:20:35 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:42.247 20:20:35 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:42.247 20:20:35 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:42.247 20:20:35 thread -- scripts/common.sh@368 -- # return 0 00:07:42.247 20:20:35 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:42.247 20:20:35 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:42.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.247 --rc genhtml_branch_coverage=1 00:07:42.247 --rc genhtml_function_coverage=1 00:07:42.247 --rc genhtml_legend=1 00:07:42.247 --rc geninfo_all_blocks=1 00:07:42.247 --rc geninfo_unexecuted_blocks=1 00:07:42.247 00:07:42.247 ' 00:07:42.247 20:20:35 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:42.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.247 --rc genhtml_branch_coverage=1 00:07:42.247 --rc genhtml_function_coverage=1 00:07:42.247 --rc genhtml_legend=1 00:07:42.247 --rc geninfo_all_blocks=1 00:07:42.247 --rc geninfo_unexecuted_blocks=1 00:07:42.247 00:07:42.247 ' 00:07:42.247 20:20:35 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:42.247 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.247 --rc genhtml_branch_coverage=1 00:07:42.247 --rc genhtml_function_coverage=1 00:07:42.247 --rc genhtml_legend=1 00:07:42.247 --rc geninfo_all_blocks=1 00:07:42.247 --rc geninfo_unexecuted_blocks=1 00:07:42.247 00:07:42.247 ' 00:07:42.247 20:20:35 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:42.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.247 --rc genhtml_branch_coverage=1 00:07:42.247 --rc genhtml_function_coverage=1 00:07:42.247 --rc genhtml_legend=1 00:07:42.247 --rc geninfo_all_blocks=1 00:07:42.247 --rc geninfo_unexecuted_blocks=1 00:07:42.247 00:07:42.247 ' 00:07:42.247 20:20:35 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:42.247 20:20:35 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:42.247 20:20:35 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:42.247 20:20:35 thread -- common/autotest_common.sh@10 -- # set +x 00:07:42.247 ************************************ 00:07:42.247 START TEST thread_poller_perf 00:07:42.247 ************************************ 00:07:42.247 20:20:35 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:42.247 [2024-11-26 20:20:35.677065] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:42.247 [2024-11-26 20:20:35.677660] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71639 ] 00:07:42.506 [2024-11-26 20:20:35.837101] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.506 [2024-11-26 20:20:35.917729] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.506 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:43.884 [2024-11-26T20:20:37.436Z] ====================================== 00:07:43.884 [2024-11-26T20:20:37.436Z] busy:2298673804 (cyc) 00:07:43.884 [2024-11-26T20:20:37.436Z] total_run_count: 372000 00:07:43.884 [2024-11-26T20:20:37.436Z] tsc_hz: 2290000000 (cyc) 00:07:43.884 [2024-11-26T20:20:37.436Z] ====================================== 00:07:43.884 [2024-11-26T20:20:37.436Z] poller_cost: 6179 (cyc), 2698 (nsec) 00:07:43.884 00:07:43.884 real 0m1.428s 00:07:43.884 user 0m1.208s 00:07:43.884 sys 0m0.112s 00:07:43.884 20:20:37 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:43.884 20:20:37 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:43.884 ************************************ 00:07:43.884 END TEST thread_poller_perf 00:07:43.884 ************************************ 00:07:43.884 20:20:37 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:43.884 20:20:37 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:43.884 20:20:37 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:43.884 20:20:37 thread -- common/autotest_common.sh@10 -- # set +x 00:07:43.884 ************************************ 00:07:43.884 START TEST thread_poller_perf 00:07:43.884 
************************************ 00:07:43.884 20:20:37 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:43.884 [2024-11-26 20:20:37.175666] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:43.884 [2024-11-26 20:20:37.175917] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71675 ] 00:07:43.884 [2024-11-26 20:20:37.336681] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.884 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:43.884 [2024-11-26 20:20:37.417435] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.263 [2024-11-26T20:20:38.815Z] ====================================== 00:07:45.263 [2024-11-26T20:20:38.815Z] busy:2293476836 (cyc) 00:07:45.263 [2024-11-26T20:20:38.815Z] total_run_count: 5043000 00:07:45.263 [2024-11-26T20:20:38.815Z] tsc_hz: 2290000000 (cyc) 00:07:45.263 [2024-11-26T20:20:38.815Z] ====================================== 00:07:45.263 [2024-11-26T20:20:38.815Z] poller_cost: 454 (cyc), 198 (nsec) 00:07:45.263 00:07:45.263 real 0m1.426s 00:07:45.263 user 0m1.202s 00:07:45.263 sys 0m0.116s 00:07:45.263 ************************************ 00:07:45.263 END TEST thread_poller_perf 00:07:45.263 ************************************ 00:07:45.263 20:20:38 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:45.263 20:20:38 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:45.263 20:20:38 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:45.263 ************************************ 00:07:45.263 END TEST thread 00:07:45.263 ************************************ 00:07:45.263 
00:07:45.263 real 0m3.218s 00:07:45.263 user 0m2.575s 00:07:45.263 sys 0m0.440s 00:07:45.263 20:20:38 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:45.263 20:20:38 thread -- common/autotest_common.sh@10 -- # set +x 00:07:45.263 20:20:38 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:45.263 20:20:38 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:45.263 20:20:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:45.263 20:20:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:45.263 20:20:38 -- common/autotest_common.sh@10 -- # set +x 00:07:45.263 ************************************ 00:07:45.263 START TEST app_cmdline 00:07:45.263 ************************************ 00:07:45.263 20:20:38 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:45.263 * Looking for test storage... 00:07:45.263 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:45.263 20:20:38 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:45.263 20:20:38 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:07:45.263 20:20:38 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:45.522 20:20:38 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:45.522 20:20:38 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:45.522 20:20:38 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:45.522 20:20:38 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:45.522 20:20:38 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:45.522 20:20:38 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:45.522 20:20:38 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:45.522 20:20:38 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:45.522 20:20:38 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:07:45.522 20:20:38 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:45.522 20:20:38 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:45.522 20:20:38 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:45.522 20:20:38 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:45.522 20:20:38 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:45.522 20:20:38 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:45.522 20:20:38 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:45.522 20:20:38 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:45.522 20:20:38 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:45.522 20:20:38 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:45.522 20:20:38 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:45.522 20:20:38 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:45.522 20:20:38 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:45.522 20:20:38 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:45.522 20:20:38 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:45.522 20:20:38 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:45.523 20:20:38 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:45.523 20:20:38 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:45.523 20:20:38 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:45.523 20:20:38 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:45.523 20:20:38 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:45.523 20:20:38 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:45.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.523 --rc genhtml_branch_coverage=1 00:07:45.523 --rc genhtml_function_coverage=1 00:07:45.523 --rc 
genhtml_legend=1 00:07:45.523 --rc geninfo_all_blocks=1 00:07:45.523 --rc geninfo_unexecuted_blocks=1 00:07:45.523 00:07:45.523 ' 00:07:45.523 20:20:38 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:45.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.523 --rc genhtml_branch_coverage=1 00:07:45.523 --rc genhtml_function_coverage=1 00:07:45.523 --rc genhtml_legend=1 00:07:45.523 --rc geninfo_all_blocks=1 00:07:45.523 --rc geninfo_unexecuted_blocks=1 00:07:45.523 00:07:45.523 ' 00:07:45.523 20:20:38 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:45.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.523 --rc genhtml_branch_coverage=1 00:07:45.523 --rc genhtml_function_coverage=1 00:07:45.523 --rc genhtml_legend=1 00:07:45.523 --rc geninfo_all_blocks=1 00:07:45.523 --rc geninfo_unexecuted_blocks=1 00:07:45.523 00:07:45.523 ' 00:07:45.523 20:20:38 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:45.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.523 --rc genhtml_branch_coverage=1 00:07:45.523 --rc genhtml_function_coverage=1 00:07:45.523 --rc genhtml_legend=1 00:07:45.523 --rc geninfo_all_blocks=1 00:07:45.523 --rc geninfo_unexecuted_blocks=1 00:07:45.523 00:07:45.523 ' 00:07:45.523 20:20:38 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:45.523 20:20:38 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=71759 00:07:45.523 20:20:38 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:45.523 20:20:38 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 71759 00:07:45.523 20:20:38 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 71759 ']' 00:07:45.523 20:20:38 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.523 20:20:38 app_cmdline -- common/autotest_common.sh@836 -- # 
local max_retries=100 00:07:45.523 20:20:38 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.523 20:20:38 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:45.523 20:20:38 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:45.523 [2024-11-26 20:20:39.000121] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:45.523 [2024-11-26 20:20:39.000364] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71759 ] 00:07:45.813 [2024-11-26 20:20:39.160705] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.813 [2024-11-26 20:20:39.239557] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.383 20:20:39 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:46.383 20:20:39 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:46.383 20:20:39 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:46.642 { 00:07:46.642 "version": "SPDK v24.09.1-pre git sha1 b18e1bd62", 00:07:46.642 "fields": { 00:07:46.642 "major": 24, 00:07:46.642 "minor": 9, 00:07:46.642 "patch": 1, 00:07:46.642 "suffix": "-pre", 00:07:46.642 "commit": "b18e1bd62" 00:07:46.642 } 00:07:46.642 } 00:07:46.642 20:20:40 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:46.642 20:20:40 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:46.642 20:20:40 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:46.643 20:20:40 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:46.643 20:20:40 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:46.643 20:20:40 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:46.643 20:20:40 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:46.643 20:20:40 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.643 20:20:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:46.643 20:20:40 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.643 20:20:40 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:46.643 20:20:40 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:46.643 20:20:40 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:46.643 20:20:40 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:46.643 20:20:40 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:46.643 20:20:40 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:46.643 20:20:40 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:46.643 20:20:40 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:46.643 20:20:40 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:46.643 20:20:40 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:46.643 20:20:40 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:46.643 20:20:40 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:46.643 20:20:40 app_cmdline -- common/autotest_common.sh@644 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:46.643 20:20:40 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:46.903 request: 00:07:46.903 { 00:07:46.903 "method": "env_dpdk_get_mem_stats", 00:07:46.903 "req_id": 1 00:07:46.903 } 00:07:46.903 Got JSON-RPC error response 00:07:46.903 response: 00:07:46.903 { 00:07:46.903 "code": -32601, 00:07:46.903 "message": "Method not found" 00:07:46.903 } 00:07:46.903 20:20:40 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:46.903 20:20:40 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:46.903 20:20:40 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:46.903 20:20:40 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:46.903 20:20:40 app_cmdline -- app/cmdline.sh@1 -- # killprocess 71759 00:07:46.903 20:20:40 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 71759 ']' 00:07:46.903 20:20:40 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 71759 00:07:46.903 20:20:40 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:46.903 20:20:40 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:46.903 20:20:40 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71759 00:07:46.903 20:20:40 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:46.903 killing process with pid 71759 00:07:46.903 20:20:40 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:46.903 20:20:40 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71759' 00:07:46.903 20:20:40 app_cmdline -- common/autotest_common.sh@969 -- # kill 71759 00:07:46.903 20:20:40 app_cmdline -- common/autotest_common.sh@974 -- # wait 71759 00:07:47.474 00:07:47.474 real 0m2.235s 00:07:47.474 user 0m2.425s 00:07:47.474 sys 0m0.645s 00:07:47.474 20:20:40 app_cmdline -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:07:47.474 ************************************ 00:07:47.474 END TEST app_cmdline 00:07:47.474 ************************************ 00:07:47.474 20:20:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:47.474 20:20:40 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:47.474 20:20:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:47.474 20:20:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:47.474 20:20:40 -- common/autotest_common.sh@10 -- # set +x 00:07:47.474 ************************************ 00:07:47.474 START TEST version 00:07:47.474 ************************************ 00:07:47.474 20:20:40 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:47.734 * Looking for test storage... 00:07:47.734 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:47.734 20:20:41 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:47.734 20:20:41 version -- common/autotest_common.sh@1681 -- # lcov --version 00:07:47.734 20:20:41 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:47.734 20:20:41 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:47.734 20:20:41 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:47.734 20:20:41 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:47.734 20:20:41 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:47.734 20:20:41 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:47.734 20:20:41 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:47.734 20:20:41 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:47.734 20:20:41 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:47.734 20:20:41 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:47.734 20:20:41 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:47.734 20:20:41 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:07:47.734 20:20:41 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:47.734 20:20:41 version -- scripts/common.sh@344 -- # case "$op" in 00:07:47.734 20:20:41 version -- scripts/common.sh@345 -- # : 1 00:07:47.734 20:20:41 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:47.734 20:20:41 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:47.734 20:20:41 version -- scripts/common.sh@365 -- # decimal 1 00:07:47.734 20:20:41 version -- scripts/common.sh@353 -- # local d=1 00:07:47.734 20:20:41 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:47.734 20:20:41 version -- scripts/common.sh@355 -- # echo 1 00:07:47.734 20:20:41 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:47.734 20:20:41 version -- scripts/common.sh@366 -- # decimal 2 00:07:47.734 20:20:41 version -- scripts/common.sh@353 -- # local d=2 00:07:47.734 20:20:41 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:47.734 20:20:41 version -- scripts/common.sh@355 -- # echo 2 00:07:47.734 20:20:41 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:47.734 20:20:41 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:47.734 20:20:41 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:47.734 20:20:41 version -- scripts/common.sh@368 -- # return 0 00:07:47.734 20:20:41 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:47.734 20:20:41 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:47.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.734 --rc genhtml_branch_coverage=1 00:07:47.734 --rc genhtml_function_coverage=1 00:07:47.734 --rc genhtml_legend=1 00:07:47.734 --rc geninfo_all_blocks=1 00:07:47.734 --rc geninfo_unexecuted_blocks=1 00:07:47.734 00:07:47.734 ' 00:07:47.734 20:20:41 version -- common/autotest_common.sh@1694 -- # 
LCOV_OPTS=' 00:07:47.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.734 --rc genhtml_branch_coverage=1 00:07:47.735 --rc genhtml_function_coverage=1 00:07:47.735 --rc genhtml_legend=1 00:07:47.735 --rc geninfo_all_blocks=1 00:07:47.735 --rc geninfo_unexecuted_blocks=1 00:07:47.735 00:07:47.735 ' 00:07:47.735 20:20:41 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:47.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.735 --rc genhtml_branch_coverage=1 00:07:47.735 --rc genhtml_function_coverage=1 00:07:47.735 --rc genhtml_legend=1 00:07:47.735 --rc geninfo_all_blocks=1 00:07:47.735 --rc geninfo_unexecuted_blocks=1 00:07:47.735 00:07:47.735 ' 00:07:47.735 20:20:41 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:47.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.735 --rc genhtml_branch_coverage=1 00:07:47.735 --rc genhtml_function_coverage=1 00:07:47.735 --rc genhtml_legend=1 00:07:47.735 --rc geninfo_all_blocks=1 00:07:47.735 --rc geninfo_unexecuted_blocks=1 00:07:47.735 00:07:47.735 ' 00:07:47.735 20:20:41 version -- app/version.sh@17 -- # get_header_version major 00:07:47.735 20:20:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:47.735 20:20:41 version -- app/version.sh@14 -- # cut -f2 00:07:47.735 20:20:41 version -- app/version.sh@14 -- # tr -d '"' 00:07:47.735 20:20:41 version -- app/version.sh@17 -- # major=24 00:07:47.735 20:20:41 version -- app/version.sh@18 -- # get_header_version minor 00:07:47.735 20:20:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:47.735 20:20:41 version -- app/version.sh@14 -- # cut -f2 00:07:47.735 20:20:41 version -- app/version.sh@14 -- # tr -d '"' 00:07:47.735 20:20:41 version -- app/version.sh@18 -- # minor=9 00:07:47.735 20:20:41 
version -- app/version.sh@19 -- # get_header_version patch 00:07:47.735 20:20:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:47.735 20:20:41 version -- app/version.sh@14 -- # cut -f2 00:07:47.735 20:20:41 version -- app/version.sh@14 -- # tr -d '"' 00:07:47.735 20:20:41 version -- app/version.sh@19 -- # patch=1 00:07:47.735 20:20:41 version -- app/version.sh@20 -- # get_header_version suffix 00:07:47.735 20:20:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:47.735 20:20:41 version -- app/version.sh@14 -- # cut -f2 00:07:47.735 20:20:41 version -- app/version.sh@14 -- # tr -d '"' 00:07:47.735 20:20:41 version -- app/version.sh@20 -- # suffix=-pre 00:07:47.735 20:20:41 version -- app/version.sh@22 -- # version=24.9 00:07:47.735 20:20:41 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:47.735 20:20:41 version -- app/version.sh@25 -- # version=24.9.1 00:07:47.735 20:20:41 version -- app/version.sh@28 -- # version=24.9.1rc0 00:07:47.735 20:20:41 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:47.735 20:20:41 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:47.995 20:20:41 version -- app/version.sh@30 -- # py_version=24.9.1rc0 00:07:47.995 20:20:41 version -- app/version.sh@31 -- # [[ 24.9.1rc0 == \2\4\.\9\.\1\r\c\0 ]] 00:07:47.995 ************************************ 00:07:47.995 END TEST version 00:07:47.995 ************************************ 00:07:47.995 00:07:47.995 real 0m0.320s 00:07:47.995 user 0m0.177s 00:07:47.995 sys 0m0.199s 00:07:47.995 20:20:41 version -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:07:47.995 20:20:41 version -- common/autotest_common.sh@10 -- # set +x 00:07:47.995 20:20:41 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:47.995 20:20:41 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:07:47.995 20:20:41 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:47.995 20:20:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:47.995 20:20:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:47.995 20:20:41 -- common/autotest_common.sh@10 -- # set +x 00:07:47.995 ************************************ 00:07:47.995 START TEST bdev_raid 00:07:47.995 ************************************ 00:07:47.995 20:20:41 bdev_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:47.995 * Looking for test storage... 00:07:47.995 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:47.995 20:20:41 bdev_raid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:47.995 20:20:41 bdev_raid -- common/autotest_common.sh@1681 -- # lcov --version 00:07:47.995 20:20:41 bdev_raid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:48.258 20:20:41 bdev_raid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:48.258 20:20:41 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:48.258 20:20:41 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:48.258 20:20:41 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:48.258 20:20:41 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:07:48.258 20:20:41 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:07:48.258 20:20:41 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:07:48.258 20:20:41 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:07:48.258 20:20:41 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:07:48.258 20:20:41 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:07:48.258 20:20:41 bdev_raid -- 
scripts/common.sh@341 -- # ver2_l=1 00:07:48.258 20:20:41 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:48.258 20:20:41 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:07:48.258 20:20:41 bdev_raid -- scripts/common.sh@345 -- # : 1 00:07:48.258 20:20:41 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:48.258 20:20:41 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:48.258 20:20:41 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:07:48.258 20:20:41 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:07:48.258 20:20:41 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:48.258 20:20:41 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:07:48.258 20:20:41 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:07:48.258 20:20:41 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:07:48.258 20:20:41 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:07:48.258 20:20:41 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:48.258 20:20:41 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:07:48.258 20:20:41 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:07:48.258 20:20:41 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:48.258 20:20:41 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:48.258 20:20:41 bdev_raid -- scripts/common.sh@368 -- # return 0 00:07:48.258 20:20:41 bdev_raid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:48.258 20:20:41 bdev_raid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:48.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.258 --rc genhtml_branch_coverage=1 00:07:48.258 --rc genhtml_function_coverage=1 00:07:48.258 --rc genhtml_legend=1 00:07:48.258 --rc geninfo_all_blocks=1 00:07:48.258 --rc geninfo_unexecuted_blocks=1 00:07:48.258 00:07:48.258 ' 00:07:48.258 20:20:41 bdev_raid -- 
common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:48.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.258 --rc genhtml_branch_coverage=1 00:07:48.258 --rc genhtml_function_coverage=1 00:07:48.258 --rc genhtml_legend=1 00:07:48.258 --rc geninfo_all_blocks=1 00:07:48.258 --rc geninfo_unexecuted_blocks=1 00:07:48.258 00:07:48.258 ' 00:07:48.258 20:20:41 bdev_raid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:48.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.258 --rc genhtml_branch_coverage=1 00:07:48.258 --rc genhtml_function_coverage=1 00:07:48.258 --rc genhtml_legend=1 00:07:48.258 --rc geninfo_all_blocks=1 00:07:48.258 --rc geninfo_unexecuted_blocks=1 00:07:48.258 00:07:48.258 ' 00:07:48.258 20:20:41 bdev_raid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:48.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.258 --rc genhtml_branch_coverage=1 00:07:48.258 --rc genhtml_function_coverage=1 00:07:48.258 --rc genhtml_legend=1 00:07:48.258 --rc geninfo_all_blocks=1 00:07:48.258 --rc geninfo_unexecuted_blocks=1 00:07:48.258 00:07:48.258 ' 00:07:48.258 20:20:41 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:48.258 20:20:41 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:07:48.258 20:20:41 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:07:48.258 20:20:41 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:07:48.258 20:20:41 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:07:48.258 20:20:41 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:07:48.258 20:20:41 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:07:48.258 20:20:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:48.258 20:20:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:48.258 20:20:41 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:07:48.258 ************************************ 00:07:48.258 START TEST raid1_resize_data_offset_test 00:07:48.258 ************************************ 00:07:48.258 20:20:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1125 -- # raid_resize_data_offset_test 00:07:48.258 20:20:41 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=71924 00:07:48.258 20:20:41 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:48.258 20:20:41 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 71924' 00:07:48.258 Process raid pid: 71924 00:07:48.258 20:20:41 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 71924 00:07:48.258 20:20:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@831 -- # '[' -z 71924 ']' 00:07:48.258 20:20:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.258 20:20:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:48.258 20:20:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.258 20:20:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:48.258 20:20:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.258 [2024-11-26 20:20:41.688673] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:48.258 [2024-11-26 20:20:41.689278] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:48.519 [2024-11-26 20:20:41.851445] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.519 [2024-11-26 20:20:41.931572] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.519 [2024-11-26 20:20:42.004892] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:48.519 [2024-11-26 20:20:42.005032] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:49.089 20:20:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:49.089 20:20:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # return 0 00:07:49.089 20:20:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:07:49.089 20:20:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.089 20:20:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.089 malloc0 00:07:49.089 20:20:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.089 20:20:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:07:49.089 20:20:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.089 20:20:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.089 malloc1 00:07:49.089 20:20:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.089 20:20:42 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:07:49.089 20:20:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.090 20:20:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.090 null0 00:07:49.090 20:20:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.090 20:20:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:07:49.090 20:20:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.090 20:20:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.090 [2024-11-26 20:20:42.601322] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:07:49.090 [2024-11-26 20:20:42.603430] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:49.090 [2024-11-26 20:20:42.603523] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:07:49.090 [2024-11-26 20:20:42.603712] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:49.090 [2024-11-26 20:20:42.603728] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:07:49.090 [2024-11-26 20:20:42.604048] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:07:49.090 [2024-11-26 20:20:42.604220] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:49.090 [2024-11-26 20:20:42.604240] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:07:49.090 [2024-11-26 20:20:42.604409] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:07:49.090 20:20:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.090 20:20:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.090 20:20:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.090 20:20:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:07:49.090 20:20:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.090 20:20:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.351 20:20:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:07:49.351 20:20:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:07:49.351 20:20:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.351 20:20:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.351 [2024-11-26 20:20:42.661230] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:07:49.351 20:20:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.351 20:20:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:07:49.351 20:20:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.351 20:20:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.351 malloc2 00:07:49.351 20:20:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.351 20:20:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:07:49.351 20:20:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.351 20:20:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.351 [2024-11-26 20:20:42.815321] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:49.351 [2024-11-26 20:20:42.823058] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:49.351 20:20:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.351 [2024-11-26 20:20:42.825292] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:07:49.351 20:20:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:07:49.351 20:20:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.351 20:20:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.351 20:20:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.351 20:20:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.351 20:20:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:07:49.351 20:20:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 71924 00:07:49.351 20:20:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@950 -- # '[' -z 71924 ']' 00:07:49.351 20:20:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # kill -0 71924 00:07:49.351 20:20:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # uname 00:07:49.351 20:20:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux 
']' 00:07:49.351 20:20:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71924 00:07:49.618 20:20:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:49.618 20:20:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:49.618 killing process with pid 71924 00:07:49.618 20:20:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71924' 00:07:49.618 20:20:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@969 -- # kill 71924 00:07:49.618 [2024-11-26 20:20:42.909220] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:49.618 20:20:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@974 -- # wait 71924 00:07:49.618 [2024-11-26 20:20:42.909588] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:07:49.618 [2024-11-26 20:20:42.909693] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:49.618 [2024-11-26 20:20:42.909718] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:07:49.618 [2024-11-26 20:20:42.918878] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:49.618 [2024-11-26 20:20:42.919200] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:49.618 [2024-11-26 20:20:42.919221] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:07:49.878 [2024-11-26 20:20:43.191846] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:50.138 20:20:43 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:07:50.138 ************************************ 00:07:50.138 END TEST raid1_resize_data_offset_test 00:07:50.138 
************************************ 00:07:50.138 00:07:50.138 real 0m1.943s 00:07:50.138 user 0m1.837s 00:07:50.138 sys 0m0.546s 00:07:50.138 20:20:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:50.138 20:20:43 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.138 20:20:43 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:07:50.138 20:20:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:50.138 20:20:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:50.138 20:20:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:50.138 ************************************ 00:07:50.138 START TEST raid0_resize_superblock_test 00:07:50.138 ************************************ 00:07:50.138 20:20:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 0 00:07:50.138 20:20:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:07:50.138 20:20:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=71980 00:07:50.138 Process raid pid: 71980 00:07:50.138 20:20:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:50.138 20:20:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 71980' 00:07:50.138 20:20:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 71980 00:07:50.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:50.138 20:20:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 71980 ']' 00:07:50.138 20:20:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.138 20:20:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:50.138 20:20:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.138 20:20:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:50.138 20:20:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.399 [2024-11-26 20:20:43.694248] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:50.399 [2024-11-26 20:20:43.694427] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:50.399 [2024-11-26 20:20:43.857331] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.399 [2024-11-26 20:20:43.938000] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.659 [2024-11-26 20:20:44.011694] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:50.659 [2024-11-26 20:20:44.011726] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:51.228 20:20:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:51.228 20:20:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:51.228 20:20:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 
00:07:51.228 20:20:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.228 20:20:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.228 malloc0 00:07:51.228 20:20:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.228 20:20:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:51.228 20:20:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.228 20:20:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.228 [2024-11-26 20:20:44.680767] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:51.228 [2024-11-26 20:20:44.680839] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:51.228 [2024-11-26 20:20:44.680865] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:51.228 [2024-11-26 20:20:44.680884] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:51.228 [2024-11-26 20:20:44.683129] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:51.228 [2024-11-26 20:20:44.683167] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:51.228 pt0 00:07:51.228 20:20:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.228 20:20:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:51.228 20:20:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.228 20:20:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.489 a8c055ff-5e0b-400f-9831-1d0efeffb5e6 00:07:51.489 20:20:44 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.489 20:20:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:07:51.489 20:20:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.489 20:20:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.489 7d54e18c-2ef9-4893-9d2f-d3ddf004aaea 00:07:51.489 20:20:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.489 20:20:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:51.489 20:20:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.489 20:20:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.489 27828267-a5fc-4b70-9074-d9cd28b7784b 00:07:51.489 20:20:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.489 20:20:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:51.489 20:20:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:51.489 20:20:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.489 20:20:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.489 [2024-11-26 20:20:44.850089] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 7d54e18c-2ef9-4893-9d2f-d3ddf004aaea is claimed 00:07:51.489 [2024-11-26 20:20:44.850199] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 27828267-a5fc-4b70-9074-d9cd28b7784b is claimed 00:07:51.489 [2024-11-26 20:20:44.850327] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:51.489 [2024-11-26 20:20:44.850342] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:07:51.489 [2024-11-26 20:20:44.850655] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:51.489 [2024-11-26 20:20:44.850855] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:51.489 [2024-11-26 20:20:44.850865] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:07:51.489 [2024-11-26 20:20:44.851009] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:51.489 20:20:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.489 20:20:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:51.489 20:20:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:51.489 20:20:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.489 20:20:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.489 20:20:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.489 20:20:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:51.489 20:20:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:51.489 20:20:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:51.489 20:20:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.489 20:20:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.489 20:20:44 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.489 20:20:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:51.489 20:20:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:51.489 20:20:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:51.489 20:20:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.489 20:20:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.489 20:20:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:51.489 20:20:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:07:51.489 [2024-11-26 20:20:44.962204] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:51.489 20:20:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.489 20:20:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:51.489 20:20:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:51.489 20:20:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:07:51.489 20:20:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:51.489 20:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.489 20:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.489 [2024-11-26 20:20:45.010076] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:51.489 [2024-11-26 20:20:45.010157] bdev_raid.c:2326:raid_bdev_resize_base_bdev: 
*NOTICE*: base_bdev '7d54e18c-2ef9-4893-9d2f-d3ddf004aaea' was resized: old size 131072, new size 204800 00:07:51.489 20:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.489 20:20:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:51.489 20:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.489 20:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.489 [2024-11-26 20:20:45.021908] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:51.489 [2024-11-26 20:20:45.021975] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '27828267-a5fc-4b70-9074-d9cd28b7784b' was resized: old size 131072, new size 204800 00:07:51.490 [2024-11-26 20:20:45.022034] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:07:51.490 20:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.490 20:20:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:51.490 20:20:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:51.490 20:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.490 20:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.750 20:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.750 20:20:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:51.750 20:20:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:51.750 20:20:45 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.750 20:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.750 20:20:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:51.750 20:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.750 20:20:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:51.750 20:20:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:51.750 20:20:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:51.750 20:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.750 20:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.750 20:20:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:51.750 20:20:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:07:51.750 [2024-11-26 20:20:45.129945] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:51.750 20:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.750 20:20:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:51.750 20:20:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:51.750 20:20:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:07:51.750 20:20:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:51.750 20:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:07:51.750 20:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.750 [2024-11-26 20:20:45.177629] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:07:51.750 [2024-11-26 20:20:45.177775] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:07:51.750 [2024-11-26 20:20:45.177809] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:51.750 [2024-11-26 20:20:45.177856] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:51.750 [2024-11-26 20:20:45.178036] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:51.750 [2024-11-26 20:20:45.178113] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:51.750 [2024-11-26 20:20:45.178165] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:07:51.750 20:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.750 20:20:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:51.750 20:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.750 20:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.750 [2024-11-26 20:20:45.189453] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:51.750 [2024-11-26 20:20:45.189571] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:51.750 [2024-11-26 20:20:45.189640] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:51.750 [2024-11-26 20:20:45.189684] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:51.750 
[2024-11-26 20:20:45.192096] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:51.750 [2024-11-26 20:20:45.192179] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:51.750 [2024-11-26 20:20:45.193914] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 7d54e18c-2ef9-4893-9d2f-d3ddf004aaea 00:07:51.750 [2024-11-26 20:20:45.194026] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 7d54e18c-2ef9-4893-9d2f-d3ddf004aaea is claimed 00:07:51.750 [2024-11-26 20:20:45.194180] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 27828267-a5fc-4b70-9074-d9cd28b7784b 00:07:51.750 [2024-11-26 20:20:45.194247] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 27828267-a5fc-4b70-9074-d9cd28b7784b is claimed 00:07:51.750 [2024-11-26 20:20:45.194424] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 27828267-a5fc-4b70-9074-d9cd28b7784b (2) smaller than existing raid bdev Raid (3) 00:07:51.750 [2024-11-26 20:20:45.194513] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 7d54e18c-2ef9-4893-9d2f-d3ddf004aaea: File exists 00:07:51.750 [2024-11-26 20:20:45.194590] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:07:51.750 [2024-11-26 20:20:45.194640] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:07:51.750 pt0 00:07:51.750 [2024-11-26 20:20:45.194951] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:51.750 [2024-11-26 20:20:45.195094] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:07:51.750 [2024-11-26 20:20:45.195104] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006600 00:07:51.750 [2024-11-26 20:20:45.195240] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:07:51.750 20:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.750 20:20:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:07:51.750 20:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.750 20:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.750 20:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.750 20:20:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:51.750 20:20:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:51.750 20:20:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:51.750 20:20:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:07:51.750 20:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.750 20:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.750 [2024-11-26 20:20:45.218267] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:51.750 20:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.750 20:20:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:51.750 20:20:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:51.750 20:20:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:07:51.750 20:20:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 71980 00:07:51.750 20:20:45 bdev_raid.raid0_resize_superblock_test 
-- common/autotest_common.sh@950 -- # '[' -z 71980 ']' 00:07:51.750 20:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 71980 00:07:51.750 20:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:51.750 20:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:51.751 20:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71980 00:07:51.751 20:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:51.751 20:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:51.751 20:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71980' 00:07:51.751 killing process with pid 71980 00:07:51.751 20:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 71980 00:07:51.751 [2024-11-26 20:20:45.297417] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:51.751 20:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 71980 00:07:51.751 [2024-11-26 20:20:45.297567] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:51.751 [2024-11-26 20:20:45.297659] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:51.751 [2024-11-26 20:20:45.297711] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Raid, state offline 00:07:52.010 [2024-11-26 20:20:45.519300] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:52.581 ************************************ 00:07:52.581 END TEST raid0_resize_superblock_test 00:07:52.581 ************************************ 00:07:52.581 20:20:45 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:52.581 00:07:52.581 real 0m2.279s 00:07:52.581 user 0m2.436s 00:07:52.581 sys 0m0.607s 00:07:52.581 20:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:52.581 20:20:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.581 20:20:45 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:07:52.581 20:20:45 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:52.581 20:20:45 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:52.581 20:20:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:52.581 ************************************ 00:07:52.581 START TEST raid1_resize_superblock_test 00:07:52.581 ************************************ 00:07:52.581 20:20:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 1 00:07:52.581 20:20:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:07:52.581 20:20:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=72057 00:07:52.581 Process raid pid: 72057 00:07:52.581 20:20:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:52.581 20:20:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 72057' 00:07:52.581 20:20:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 72057 00:07:52.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:52.581 20:20:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 72057 ']' 00:07:52.581 20:20:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.581 20:20:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:52.581 20:20:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.581 20:20:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:52.581 20:20:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.581 [2024-11-26 20:20:46.031116] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:52.581 [2024-11-26 20:20:46.031256] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:52.840 [2024-11-26 20:20:46.173395] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.840 [2024-11-26 20:20:46.254844] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.840 [2024-11-26 20:20:46.329120] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:52.840 [2024-11-26 20:20:46.329151] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:53.410 20:20:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:53.410 20:20:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:53.410 20:20:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 
00:07:53.410 20:20:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.410 20:20:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.670 malloc0 00:07:53.670 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.670 20:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:53.670 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.670 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.670 [2024-11-26 20:20:47.037069] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:53.670 [2024-11-26 20:20:47.037176] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:53.670 [2024-11-26 20:20:47.037201] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:53.670 [2024-11-26 20:20:47.037212] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:53.670 [2024-11-26 20:20:47.039459] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:53.670 [2024-11-26 20:20:47.039504] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:53.670 pt0 00:07:53.670 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.670 20:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:53.670 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.670 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.670 ae65c69f-cd40-4abb-97d3-a837eeb73e6e 00:07:53.670 20:20:47 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.670 20:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:07:53.670 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.670 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.670 e426e93c-96a7-41c1-8a99-93e54825fec1 00:07:53.671 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.671 20:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:53.671 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.671 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.671 045c6a19-1565-46e5-9310-f7ff59112b51 00:07:53.671 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.671 20:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:53.671 20:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:53.671 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.671 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.671 [2024-11-26 20:20:47.204970] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev e426e93c-96a7-41c1-8a99-93e54825fec1 is claimed 00:07:53.671 [2024-11-26 20:20:47.205067] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 045c6a19-1565-46e5-9310-f7ff59112b51 is claimed 00:07:53.671 [2024-11-26 20:20:47.205203] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:53.671 [2024-11-26 20:20:47.205225] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:07:53.671 [2024-11-26 20:20:47.205513] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:53.671 [2024-11-26 20:20:47.205730] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:53.671 [2024-11-26 20:20:47.205743] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:07:53.671 [2024-11-26 20:20:47.205929] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:53.671 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.671 20:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:53.671 20:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:53.671 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.671 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.931 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.931 20:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:53.931 20:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:53.931 20:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:53.931 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.931 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.931 20:20:47 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.931 20:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:53.931 20:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:53.931 20:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:07:53.931 20:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:53.931 20:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:53.931 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.931 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.931 [2024-11-26 20:20:47.297068] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:53.931 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.931 20:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:53.931 20:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:53.931 20:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:07:53.931 20:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:53.931 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.931 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.931 [2024-11-26 20:20:47.333054] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:53.931 [2024-11-26 20:20:47.333084] bdev_raid.c:2326:raid_bdev_resize_base_bdev: 
*NOTICE*: base_bdev 'e426e93c-96a7-41c1-8a99-93e54825fec1' was resized: old size 131072, new size 204800 00:07:53.931 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.931 20:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:53.931 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.931 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.931 [2024-11-26 20:20:47.344864] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:53.931 [2024-11-26 20:20:47.344889] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '045c6a19-1565-46e5-9310-f7ff59112b51' was resized: old size 131072, new size 204800 00:07:53.931 [2024-11-26 20:20:47.344919] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:07:53.931 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.931 20:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:53.931 20:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:53.931 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.931 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.931 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.931 20:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:53.931 20:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:53.931 20:20:47 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.931 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.931 20:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:53.931 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.931 20:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:53.931 20:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:53.931 20:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:53.931 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.932 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.932 20:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:53.932 20:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:07:53.932 [2024-11-26 20:20:47.452812] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:53.932 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.191 20:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:54.191 20:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:54.191 20:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:07:54.191 20:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:54.192 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:07:54.192 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.192 [2024-11-26 20:20:47.496563] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:07:54.192 [2024-11-26 20:20:47.496693] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:07:54.192 [2024-11-26 20:20:47.496740] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:54.192 [2024-11-26 20:20:47.496938] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:54.192 [2024-11-26 20:20:47.497153] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:54.192 [2024-11-26 20:20:47.497221] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:54.192 [2024-11-26 20:20:47.497236] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:07:54.192 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.192 20:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:54.192 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.192 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.192 [2024-11-26 20:20:47.508436] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:54.192 [2024-11-26 20:20:47.508535] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:54.192 [2024-11-26 20:20:47.508560] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:54.192 [2024-11-26 20:20:47.508574] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:54.192 
[2024-11-26 20:20:47.510989] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:54.192 [2024-11-26 20:20:47.511028] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:54.192 [2024-11-26 20:20:47.512692] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev e426e93c-96a7-41c1-8a99-93e54825fec1 00:07:54.192 [2024-11-26 20:20:47.512757] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev e426e93c-96a7-41c1-8a99-93e54825fec1 is claimed 00:07:54.192 [2024-11-26 20:20:47.512850] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 045c6a19-1565-46e5-9310-f7ff59112b51 00:07:54.192 [2024-11-26 20:20:47.512872] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 045c6a19-1565-46e5-9310-f7ff59112b51 is claimed 00:07:54.192 [2024-11-26 20:20:47.513021] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 045c6a19-1565-46e5-9310-f7ff59112b51 (2) smaller than existing raid bdev Raid (3) 00:07:54.192 [2024-11-26 20:20:47.513042] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev e426e93c-96a7-41c1-8a99-93e54825fec1: File exists 00:07:54.192 [2024-11-26 20:20:47.513081] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:07:54.192 [2024-11-26 20:20:47.513091] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:54.192 [2024-11-26 20:20:47.513315] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:54.192 [2024-11-26 20:20:47.513441] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:07:54.192 [2024-11-26 20:20:47.513449] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006600 00:07:54.192 pt0 00:07:54.192 [2024-11-26 20:20:47.513566] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:07:54.192 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.192 20:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:07:54.192 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.192 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.192 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.192 20:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:54.192 20:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:07:54.192 20:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:54.192 20:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:54.192 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.192 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.192 [2024-11-26 20:20:47.537074] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:54.192 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.192 20:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:54.192 20:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:54.192 20:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:07:54.192 20:20:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 72057 00:07:54.192 20:20:47 bdev_raid.raid1_resize_superblock_test 
-- common/autotest_common.sh@950 -- # '[' -z 72057 ']' 00:07:54.192 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 72057 00:07:54.192 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:54.192 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:54.192 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72057 00:07:54.192 killing process with pid 72057 00:07:54.192 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:54.192 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:54.192 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72057' 00:07:54.192 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 72057 00:07:54.192 [2024-11-26 20:20:47.618167] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:54.192 [2024-11-26 20:20:47.618262] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:54.192 [2024-11-26 20:20:47.618316] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:54.192 [2024-11-26 20:20:47.618325] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Raid, state offline 00:07:54.192 20:20:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 72057 00:07:54.451 [2024-11-26 20:20:47.839471] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:54.768 ************************************ 00:07:54.769 END TEST raid1_resize_superblock_test 00:07:54.769 ************************************ 00:07:54.769 20:20:48 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:54.769 00:07:54.769 real 0m2.259s 00:07:54.769 user 0m2.446s 00:07:54.769 sys 0m0.584s 00:07:54.769 20:20:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:54.769 20:20:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.769 20:20:48 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:07:54.769 20:20:48 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:07:54.769 20:20:48 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:07:54.769 20:20:48 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:07:54.769 20:20:48 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:07:54.769 20:20:48 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:07:54.769 20:20:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:54.769 20:20:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:54.769 20:20:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:54.769 ************************************ 00:07:54.769 START TEST raid_function_test_raid0 00:07:54.769 ************************************ 00:07:54.769 20:20:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1125 -- # raid_function_test raid0 00:07:54.769 20:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:07:54.769 20:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:54.769 20:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:54.769 20:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=72132 00:07:54.769 20:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 
00:07:54.769 Process raid pid: 72132 00:07:54.769 20:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 72132' 00:07:54.769 20:20:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 72132 00:07:54.769 20:20:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@831 -- # '[' -z 72132 ']' 00:07:54.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.769 20:20:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.769 20:20:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:54.769 20:20:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.769 20:20:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:54.769 20:20:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:55.043 [2024-11-26 20:20:48.364012] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:55.043 [2024-11-26 20:20:48.364250] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:55.043 [2024-11-26 20:20:48.507247] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.043 [2024-11-26 20:20:48.589569] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.302 [2024-11-26 20:20:48.664867] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:55.302 [2024-11-26 20:20:48.664903] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:55.873 20:20:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:55.873 20:20:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # return 0 00:07:55.873 20:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:55.873 20:20:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.873 20:20:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:55.873 Base_1 00:07:55.873 20:20:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.873 20:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:55.873 20:20:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.873 20:20:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:55.873 Base_2 00:07:55.873 20:20:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.873 20:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 
64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:07:55.873 20:20:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.873 20:20:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:55.873 [2024-11-26 20:20:49.251878] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:55.873 [2024-11-26 20:20:49.253808] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:55.873 [2024-11-26 20:20:49.253954] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:55.873 [2024-11-26 20:20:49.253977] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:55.873 [2024-11-26 20:20:49.254269] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:55.873 [2024-11-26 20:20:49.254412] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:55.873 [2024-11-26 20:20:49.254422] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000006280 00:07:55.873 [2024-11-26 20:20:49.254578] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:55.873 20:20:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.873 20:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:55.873 20:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:55.873 20:20:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.873 20:20:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:55.873 20:20:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.873 20:20:49 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:55.873 20:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:55.873 20:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:55.873 20:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:55.873 20:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:55.873 20:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:55.873 20:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:55.873 20:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:55.873 20:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:07:55.873 20:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:55.873 20:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:55.873 20:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:56.133 [2024-11-26 20:20:49.487490] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:56.133 /dev/nbd0 00:07:56.133 20:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:56.133 20:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:56.133 20:20:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:56.133 20:20:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # local i 00:07:56.133 20:20:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:56.133 
20:20:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:56.133 20:20:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:56.133 20:20:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # break 00:07:56.133 20:20:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:56.133 20:20:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:56.133 20:20:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:56.133 1+0 records in 00:07:56.133 1+0 records out 00:07:56.133 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000583982 s, 7.0 MB/s 00:07:56.133 20:20:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:56.133 20:20:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # size=4096 00:07:56.133 20:20:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:56.133 20:20:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:56.133 20:20:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # return 0 00:07:56.133 20:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:56.133 20:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:56.133 20:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:56.133 20:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:56.133 20:20:49 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:56.394 20:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:56.394 { 00:07:56.394 "nbd_device": "/dev/nbd0", 00:07:56.394 "bdev_name": "raid" 00:07:56.394 } 00:07:56.394 ]' 00:07:56.394 20:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:56.394 20:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:56.394 { 00:07:56.394 "nbd_device": "/dev/nbd0", 00:07:56.394 "bdev_name": "raid" 00:07:56.394 } 00:07:56.394 ]' 00:07:56.394 20:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:56.394 20:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:56.394 20:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:56.394 20:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:07:56.394 20:20:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:07:56.394 20:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:07:56.394 20:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:56.394 20:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:56.394 20:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:56.394 20:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:56.394 20:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:56.394 20:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:56.394 20:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v 
LOG-SEC 00:07:56.394 20:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:56.394 20:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:56.394 20:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:56.394 20:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:56.394 20:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:56.394 20:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:56.394 20:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:56.394 20:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:56.394 20:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:56.394 20:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:56.394 20:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:56.394 4096+0 records in 00:07:56.394 4096+0 records out 00:07:56.394 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0371825 s, 56.4 MB/s 00:07:56.394 20:20:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:56.653 4096+0 records in 00:07:56.653 4096+0 records out 00:07:56.653 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.213232 s, 9.8 MB/s 00:07:56.653 20:20:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:56.654 20:20:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:56.654 20:20:50 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:56.654 20:20:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:56.654 20:20:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:56.654 20:20:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:56.654 20:20:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:56.654 128+0 records in 00:07:56.654 128+0 records out 00:07:56.654 65536 bytes (66 kB, 64 KiB) copied, 0.00117344 s, 55.8 MB/s 00:07:56.654 20:20:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:56.654 20:20:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:56.654 20:20:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:56.654 20:20:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:56.654 20:20:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:56.654 20:20:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:56.654 20:20:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:56.654 20:20:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:56.654 2035+0 records in 00:07:56.654 2035+0 records out 00:07:56.654 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.007819 s, 133 MB/s 00:07:56.654 20:20:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:56.654 20:20:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:56.654 20:20:50 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:56.654 20:20:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:56.654 20:20:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:56.654 20:20:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:56.654 20:20:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:56.654 20:20:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:56.654 456+0 records in 00:07:56.654 456+0 records out 00:07:56.654 233472 bytes (233 kB, 228 KiB) copied, 0.00402949 s, 57.9 MB/s 00:07:56.654 20:20:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:56.654 20:20:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:56.654 20:20:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:56.654 20:20:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:56.654 20:20:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:56.654 20:20:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:07:56.654 20:20:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:56.654 20:20:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:56.654 20:20:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:56.654 20:20:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:56.654 20:20:50 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:07:56.654 20:20:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:56.654 20:20:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:57.222 20:20:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:57.222 [2024-11-26 20:20:50.467367] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:57.222 20:20:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:57.222 20:20:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:57.222 20:20:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:57.222 20:20:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:57.222 20:20:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:57.222 20:20:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:07:57.222 20:20:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:07:57.222 20:20:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:57.222 20:20:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:57.222 20:20:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:57.222 20:20:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:57.222 20:20:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:57.222 20:20:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:57.222 20:20:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:57.222 20:20:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:57.222 20:20:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:57.222 20:20:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:07:57.222 20:20:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:07:57.222 20:20:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:57.222 20:20:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:07:57.222 20:20:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:57.222 20:20:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 72132 00:07:57.222 20:20:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@950 -- # '[' -z 72132 ']' 00:07:57.222 20:20:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # kill -0 72132 00:07:57.222 20:20:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # uname 00:07:57.222 20:20:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:57.222 20:20:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72132 00:07:57.222 killing process with pid 72132 00:07:57.222 20:20:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:57.222 20:20:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:57.222 20:20:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72132' 00:07:57.222 20:20:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@969 -- # kill 72132 
00:07:57.222 [2024-11-26 20:20:50.762895] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:57.222 [2024-11-26 20:20:50.763034] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:57.222 20:20:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@974 -- # wait 72132 00:07:57.222 [2024-11-26 20:20:50.763089] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:57.222 [2024-11-26 20:20:50.763102] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid, state offline 00:07:57.481 [2024-11-26 20:20:50.801557] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:57.741 ************************************ 00:07:57.741 END TEST raid_function_test_raid0 00:07:57.741 ************************************ 00:07:57.741 20:20:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:07:57.741 00:07:57.741 real 0m2.884s 00:07:57.741 user 0m3.444s 00:07:57.741 sys 0m1.011s 00:07:57.741 20:20:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:57.741 20:20:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:57.741 20:20:51 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:07:57.741 20:20:51 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:57.741 20:20:51 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:57.741 20:20:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:57.741 ************************************ 00:07:57.741 START TEST raid_function_test_concat 00:07:57.741 ************************************ 00:07:57.741 20:20:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1125 -- # raid_function_test concat 00:07:57.741 20:20:51 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:07:57.741 20:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:57.741 20:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:57.741 20:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=72250 00:07:57.741 20:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:57.741 20:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 72250' 00:07:57.741 Process raid pid: 72250 00:07:57.741 20:20:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 72250 00:07:57.741 20:20:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@831 -- # '[' -z 72250 ']' 00:07:57.741 20:20:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.741 20:20:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:57.741 20:20:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.741 20:20:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:57.741 20:20:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:57.999 [2024-11-26 20:20:51.315628] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:57.999 [2024-11-26 20:20:51.315867] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:58.000 [2024-11-26 20:20:51.478192] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.259 [2024-11-26 20:20:51.560926] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.259 [2024-11-26 20:20:51.636230] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:58.259 [2024-11-26 20:20:51.636342] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:58.827 20:20:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:58.827 20:20:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # return 0 00:07:58.827 20:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:58.827 20:20:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.827 20:20:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:58.827 Base_1 00:07:58.827 20:20:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.827 20:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:58.827 20:20:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.827 20:20:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:58.827 Base_2 00:07:58.827 20:20:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.828 20:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:07:58.828 20:20:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.828 20:20:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:58.828 [2024-11-26 20:20:52.234031] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:58.828 [2024-11-26 20:20:52.236218] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:58.828 [2024-11-26 20:20:52.236326] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:58.828 [2024-11-26 20:20:52.236340] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:58.828 [2024-11-26 20:20:52.236688] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:58.828 [2024-11-26 20:20:52.236855] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:58.828 [2024-11-26 20:20:52.236872] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000006280 00:07:58.828 [2024-11-26 20:20:52.237036] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:58.828 20:20:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.828 20:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:58.828 20:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:58.828 20:20:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.828 20:20:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:58.828 20:20:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.828 20:20:52 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:58.828 20:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:58.828 20:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:58.828 20:20:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:58.828 20:20:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:58.828 20:20:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:58.828 20:20:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:58.828 20:20:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:58.828 20:20:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:07:58.828 20:20:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:58.828 20:20:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:58.828 20:20:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:59.089 [2024-11-26 20:20:52.485640] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:59.089 /dev/nbd0 00:07:59.089 20:20:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:59.089 20:20:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:59.089 20:20:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:59.089 20:20:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # local i 00:07:59.089 20:20:52 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:59.089 20:20:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:59.089 20:20:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:59.089 20:20:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # break 00:07:59.089 20:20:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:59.089 20:20:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:59.089 20:20:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:59.089 1+0 records in 00:07:59.089 1+0 records out 00:07:59.089 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00046007 s, 8.9 MB/s 00:07:59.089 20:20:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:59.089 20:20:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # size=4096 00:07:59.089 20:20:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:59.089 20:20:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:59.089 20:20:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # return 0 00:07:59.089 20:20:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:59.089 20:20:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:59.089 20:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:59.089 20:20:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 
00:07:59.089 20:20:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:59.350 20:20:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:59.350 { 00:07:59.350 "nbd_device": "/dev/nbd0", 00:07:59.350 "bdev_name": "raid" 00:07:59.350 } 00:07:59.350 ]' 00:07:59.350 20:20:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:59.350 20:20:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:59.350 { 00:07:59.350 "nbd_device": "/dev/nbd0", 00:07:59.350 "bdev_name": "raid" 00:07:59.350 } 00:07:59.350 ]' 00:07:59.350 20:20:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:59.350 20:20:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:59.350 20:20:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:59.350 20:20:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:07:59.350 20:20:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:07:59.350 20:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:07:59.350 20:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:59.350 20:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:59.350 20:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:59.350 20:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:59.350 20:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:59.351 20:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:59.351 20:20:52 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:59.351 20:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:59.351 20:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:59.351 20:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:59.351 20:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:59.351 20:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:59.351 20:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:59.351 20:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:59.351 20:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:59.351 20:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:59.351 20:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:59.351 20:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:59.351 4096+0 records in 00:07:59.351 4096+0 records out 00:07:59.351 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0292111 s, 71.8 MB/s 00:07:59.351 20:20:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:59.614 4096+0 records in 00:07:59.614 4096+0 records out 00:07:59.614 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.219466 s, 9.6 MB/s 00:07:59.614 20:20:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:59.614 20:20:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest 
/dev/nbd0 00:07:59.614 20:20:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:59.614 20:20:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:59.614 20:20:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:59.614 20:20:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:59.614 20:20:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:59.614 128+0 records in 00:07:59.614 128+0 records out 00:07:59.614 65536 bytes (66 kB, 64 KiB) copied, 0.00106923 s, 61.3 MB/s 00:07:59.614 20:20:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:59.614 20:20:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:59.614 20:20:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:59.614 20:20:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:59.614 20:20:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:59.614 20:20:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:59.614 20:20:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:59.614 20:20:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:59.614 2035+0 records in 00:07:59.614 2035+0 records out 00:07:59.614 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0151501 s, 68.8 MB/s 00:07:59.614 20:20:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:59.873 20:20:53 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:59.873 20:20:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:59.873 20:20:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:59.873 20:20:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:59.873 20:20:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:59.873 20:20:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:59.873 20:20:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:59.873 456+0 records in 00:07:59.873 456+0 records out 00:07:59.873 233472 bytes (233 kB, 228 KiB) copied, 0.00365596 s, 63.9 MB/s 00:07:59.873 20:20:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:59.874 20:20:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:59.874 20:20:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:59.874 20:20:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:59.874 20:20:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:59.874 20:20:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:07:59.874 20:20:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:59.874 20:20:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:59.874 20:20:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:59.874 20:20:53 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:59.874 20:20:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:07:59.874 20:20:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:59.874 20:20:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:08:00.134 [2024-11-26 20:20:53.447045] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:00.134 20:20:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:00.134 20:20:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:00.134 20:20:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:00.134 20:20:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:00.134 20:20:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:00.134 20:20:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:00.134 20:20:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:08:00.134 20:20:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:08:00.134 20:20:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:08:00.134 20:20:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:08:00.134 20:20:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:08:00.393 20:20:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:00.393 20:20:53 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:08:00.393 20:20:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:00.393 20:20:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:00.393 20:20:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:00.393 20:20:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:00.393 20:20:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:08:00.393 20:20:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:08:00.393 20:20:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:00.393 20:20:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:08:00.394 20:20:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:08:00.394 20:20:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 72250 00:08:00.394 20:20:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@950 -- # '[' -z 72250 ']' 00:08:00.394 20:20:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # kill -0 72250 00:08:00.394 20:20:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # uname 00:08:00.394 20:20:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:00.394 20:20:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72250 00:08:00.394 killing process with pid 72250 00:08:00.394 20:20:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:00.394 20:20:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:00.394 20:20:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 72250' 00:08:00.394 20:20:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@969 -- # kill 72250 00:08:00.394 20:20:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@974 -- # wait 72250 00:08:00.394 [2024-11-26 20:20:53.785877] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:00.394 [2024-11-26 20:20:53.786009] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:00.394 [2024-11-26 20:20:53.786076] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:00.394 [2024-11-26 20:20:53.786091] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid, state offline 00:08:00.394 [2024-11-26 20:20:53.824144] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:00.654 ************************************ 00:08:00.654 END TEST raid_function_test_concat 00:08:00.654 ************************************ 00:08:00.654 20:20:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:08:00.654 00:08:00.654 real 0m2.966s 00:08:00.654 user 0m3.636s 00:08:00.654 sys 0m0.981s 00:08:00.654 20:20:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:00.654 20:20:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:00.913 20:20:54 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:08:00.913 20:20:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:00.913 20:20:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:00.913 20:20:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:00.913 ************************************ 00:08:00.913 START TEST raid0_resize_test 00:08:00.913 ************************************ 00:08:00.913 20:20:54 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@1125 -- # raid_resize_test 0 00:08:00.913 20:20:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:08:00.913 20:20:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:08:00.913 20:20:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:08:00.913 20:20:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:08:00.913 20:20:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:08:00.913 20:20:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:08:00.913 20:20:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:08:00.913 20:20:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:08:00.913 20:20:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=72366 00:08:00.913 20:20:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 72366' 00:08:00.913 20:20:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:00.913 Process raid pid: 72366 00:08:00.913 20:20:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 72366 00:08:00.913 20:20:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@831 -- # '[' -z 72366 ']' 00:08:00.913 20:20:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.913 20:20:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:00.913 20:20:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:00.913 20:20:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:00.913 20:20:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.913 [2024-11-26 20:20:54.354376] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:00.913 [2024-11-26 20:20:54.354536] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:01.172 [2024-11-26 20:20:54.503755] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.172 [2024-11-26 20:20:54.587409] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.172 [2024-11-26 20:20:54.662146] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:01.172 [2024-11-26 20:20:54.662192] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:01.740 20:20:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:01.740 20:20:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # return 0 00:08:01.740 20:20:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:08:01.740 20:20:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.740 20:20:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.740 Base_1 00:08:01.740 20:20:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.740 20:20:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:08:01.740 20:20:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.740 20:20:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:01.740 Base_2 00:08:01.740 20:20:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.740 20:20:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:08:01.740 20:20:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:08:01.740 20:20:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.740 20:20:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.740 [2024-11-26 20:20:55.244832] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:01.740 [2024-11-26 20:20:55.246860] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:01.740 [2024-11-26 20:20:55.246943] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:08:01.740 [2024-11-26 20:20:55.246976] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:01.740 [2024-11-26 20:20:55.247343] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:08:01.740 [2024-11-26 20:20:55.247495] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:08:01.740 [2024-11-26 20:20:55.247514] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:08:01.740 [2024-11-26 20:20:55.247738] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:01.740 20:20:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.740 20:20:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:08:01.740 20:20:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.740 20:20:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 
00:08:01.740 [2024-11-26 20:20:55.252773] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:01.740 [2024-11-26 20:20:55.252809] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:08:01.740 true 00:08:01.740 20:20:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.740 20:20:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:01.740 20:20:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.740 20:20:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:08:01.740 20:20:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.740 [2024-11-26 20:20:55.264999] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:01.740 20:20:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.000 20:20:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:08:02.000 20:20:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:08:02.000 20:20:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:08:02.000 20:20:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:08:02.000 20:20:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:08:02.000 20:20:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:08:02.000 20:20:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.000 20:20:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.000 [2024-11-26 20:20:55.304752] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:02.000 [2024-11-26 20:20:55.304789] 
bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:08:02.000 [2024-11-26 20:20:55.304840] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:08:02.000 true 00:08:02.000 20:20:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.000 20:20:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:02.000 20:20:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:08:02.000 20:20:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.000 20:20:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.000 [2024-11-26 20:20:55.316908] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:02.000 20:20:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.000 20:20:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:08:02.000 20:20:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:08:02.000 20:20:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:08:02.000 20:20:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:08:02.000 20:20:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:08:02.000 20:20:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 72366 00:08:02.000 20:20:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@950 -- # '[' -z 72366 ']' 00:08:02.000 20:20:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # kill -0 72366 00:08:02.000 20:20:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # uname 00:08:02.000 20:20:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- 
# '[' Linux = Linux ']' 00:08:02.000 20:20:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72366 00:08:02.000 killing process with pid 72366 00:08:02.000 20:20:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:02.000 20:20:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:02.000 20:20:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72366' 00:08:02.000 20:20:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@969 -- # kill 72366 00:08:02.000 20:20:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@974 -- # wait 72366 00:08:02.000 [2024-11-26 20:20:55.405449] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:02.000 [2024-11-26 20:20:55.405603] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:02.000 [2024-11-26 20:20:55.405702] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:02.000 [2024-11-26 20:20:55.405731] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:08:02.000 [2024-11-26 20:20:55.408130] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:02.260 ************************************ 00:08:02.260 END TEST raid0_resize_test 00:08:02.260 20:20:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:08:02.260 00:08:02.260 real 0m1.518s 00:08:02.260 user 0m1.633s 00:08:02.260 sys 0m0.401s 00:08:02.260 20:20:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:02.260 20:20:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.260 ************************************ 00:08:02.520 20:20:55 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:08:02.520 
20:20:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:02.520 20:20:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:02.520 20:20:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:02.520 ************************************ 00:08:02.520 START TEST raid1_resize_test 00:08:02.520 ************************************ 00:08:02.520 20:20:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 1 00:08:02.520 20:20:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:08:02.520 20:20:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:08:02.520 20:20:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:08:02.520 20:20:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:08:02.520 20:20:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:08:02.520 20:20:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:08:02.520 20:20:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:08:02.520 20:20:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:08:02.520 20:20:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=72411 00:08:02.520 20:20:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:02.520 20:20:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 72411' 00:08:02.520 Process raid pid: 72411 00:08:02.520 20:20:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 72411 00:08:02.520 20:20:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@831 -- # '[' -z 72411 ']' 00:08:02.521 20:20:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:08:02.521 20:20:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:02.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.521 20:20:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.521 20:20:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:02.521 20:20:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.521 [2024-11-26 20:20:55.936972] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:02.521 [2024-11-26 20:20:55.937140] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:02.779 [2024-11-26 20:20:56.104865] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.779 [2024-11-26 20:20:56.189722] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.779 [2024-11-26 20:20:56.265040] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:02.779 [2024-11-26 20:20:56.265083] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:03.351 20:20:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:03.351 20:20:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # return 0 00:08:03.351 20:20:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:08:03.351 20:20:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.351 20:20:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.351 
Base_1 00:08:03.351 20:20:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.351 20:20:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:08:03.351 20:20:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.351 20:20:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.351 Base_2 00:08:03.351 20:20:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.351 20:20:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:08:03.351 20:20:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:08:03.351 20:20:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.351 20:20:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.351 [2024-11-26 20:20:56.831688] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:03.351 [2024-11-26 20:20:56.833740] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:03.351 [2024-11-26 20:20:56.833813] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:08:03.351 [2024-11-26 20:20:56.833824] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:03.351 [2024-11-26 20:20:56.834149] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:08:03.351 [2024-11-26 20:20:56.834290] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:08:03.351 [2024-11-26 20:20:56.834300] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:08:03.351 [2024-11-26 20:20:56.834470] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:08:03.351 20:20:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.351 20:20:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:08:03.351 20:20:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.351 20:20:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.351 [2024-11-26 20:20:56.839626] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:03.351 [2024-11-26 20:20:56.839675] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:08:03.351 true 00:08:03.351 20:20:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.351 20:20:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:03.351 20:20:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:08:03.351 20:20:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.351 20:20:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.351 [2024-11-26 20:20:56.851886] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:03.351 20:20:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.351 20:20:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:08:03.351 20:20:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:08:03.351 20:20:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:08:03.351 20:20:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:08:03.351 20:20:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:08:03.351 20:20:56 bdev_raid.raid1_resize_test -- 
bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:08:03.351 20:20:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.351 20:20:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.351 [2024-11-26 20:20:56.895589] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:03.351 [2024-11-26 20:20:56.895642] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:08:03.351 [2024-11-26 20:20:56.895678] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:08:03.351 true 00:08:03.351 20:20:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.609 20:20:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:03.609 20:20:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:08:03.609 20:20:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.609 20:20:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.609 [2024-11-26 20:20:56.907764] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:03.609 20:20:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.609 20:20:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:08:03.609 20:20:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:08:03.609 20:20:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:08:03.609 20:20:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:08:03.609 20:20:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:08:03.609 20:20:56 bdev_raid.raid1_resize_test -- 
bdev/bdev_raid.sh@387 -- # killprocess 72411 00:08:03.609 20:20:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@950 -- # '[' -z 72411 ']' 00:08:03.609 20:20:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # kill -0 72411 00:08:03.609 20:20:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # uname 00:08:03.609 20:20:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:03.609 20:20:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72411 00:08:03.609 20:20:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:03.609 20:20:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:03.609 killing process with pid 72411 00:08:03.609 20:20:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72411' 00:08:03.609 20:20:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@969 -- # kill 72411 00:08:03.609 20:20:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@974 -- # wait 72411 00:08:03.609 [2024-11-26 20:20:56.994689] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:03.609 [2024-11-26 20:20:56.994805] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:03.609 [2024-11-26 20:20:56.995336] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:03.609 [2024-11-26 20:20:56.995360] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:08:03.609 [2024-11-26 20:20:56.997284] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:03.869 20:20:57 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:08:03.869 00:08:03.869 real 0m1.529s 00:08:03.869 user 0m1.651s 00:08:03.869 sys 0m0.398s 00:08:03.869 20:20:57 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:03.869 20:20:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.869 ************************************ 00:08:03.869 END TEST raid1_resize_test 00:08:03.869 ************************************ 00:08:04.127 20:20:57 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:04.127 20:20:57 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:04.127 20:20:57 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:08:04.127 20:20:57 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:04.127 20:20:57 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:04.127 20:20:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:04.127 ************************************ 00:08:04.127 START TEST raid_state_function_test 00:08:04.127 ************************************ 00:08:04.127 20:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 false 00:08:04.127 20:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:04.127 20:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:04.127 20:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:04.127 20:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:04.127 20:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:04.127 20:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:04.127 20:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:04.127 20:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:08:04.127 20:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:04.127 20:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:04.127 20:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:04.127 20:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:04.127 20:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:04.127 20:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:04.127 20:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:04.127 20:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:04.127 20:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:04.127 20:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:04.127 20:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:04.127 20:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:04.127 20:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:04.127 20:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:04.127 20:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:04.127 20:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=72468 00:08:04.127 20:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:04.127 20:20:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72468' 00:08:04.127 Process raid pid: 72468 00:08:04.127 20:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 72468 00:08:04.127 20:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 72468 ']' 00:08:04.127 20:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.127 20:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:04.127 20:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.127 20:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:04.127 20:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.127 [2024-11-26 20:20:57.542750] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:04.127 [2024-11-26 20:20:57.542900] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:04.385 [2024-11-26 20:20:57.707091] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.385 [2024-11-26 20:20:57.792980] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.385 [2024-11-26 20:20:57.869873] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:04.385 [2024-11-26 20:20:57.869918] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:04.953 20:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:04.953 20:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:04.953 20:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:04.953 20:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.953 20:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.953 [2024-11-26 20:20:58.433039] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:04.953 [2024-11-26 20:20:58.433106] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:04.954 [2024-11-26 20:20:58.433119] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:04.954 [2024-11-26 20:20:58.433129] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:04.954 20:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.954 20:20:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:04.954 20:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:04.954 20:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:04.954 20:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:04.954 20:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:04.954 20:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:04.954 20:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.954 20:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.954 20:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.954 20:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.954 20:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.954 20:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.954 20:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.954 20:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:04.954 20:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.954 20:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.954 "name": "Existed_Raid", 00:08:04.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.954 "strip_size_kb": 64, 00:08:04.954 "state": "configuring", 00:08:04.954 
"raid_level": "raid0", 00:08:04.954 "superblock": false, 00:08:04.954 "num_base_bdevs": 2, 00:08:04.954 "num_base_bdevs_discovered": 0, 00:08:04.954 "num_base_bdevs_operational": 2, 00:08:04.954 "base_bdevs_list": [ 00:08:04.954 { 00:08:04.954 "name": "BaseBdev1", 00:08:04.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.954 "is_configured": false, 00:08:04.954 "data_offset": 0, 00:08:04.954 "data_size": 0 00:08:04.954 }, 00:08:04.954 { 00:08:04.954 "name": "BaseBdev2", 00:08:04.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.954 "is_configured": false, 00:08:04.954 "data_offset": 0, 00:08:04.954 "data_size": 0 00:08:04.954 } 00:08:04.954 ] 00:08:04.954 }' 00:08:04.954 20:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.954 20:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.521 20:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:05.521 20:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.521 20:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.521 [2024-11-26 20:20:58.896168] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:05.521 [2024-11-26 20:20:58.896225] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:08:05.521 20:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.521 20:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:05.521 20:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.521 20:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:05.521 [2024-11-26 20:20:58.904190] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:05.521 [2024-11-26 20:20:58.904248] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:05.521 [2024-11-26 20:20:58.904257] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:05.521 [2024-11-26 20:20:58.904266] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:05.521 20:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.522 20:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:05.522 20:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.522 20:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.522 [2024-11-26 20:20:58.924015] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:05.522 BaseBdev1 00:08:05.522 20:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.522 20:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:05.522 20:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:05.522 20:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:05.522 20:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:05.522 20:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:05.522 20:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:05.522 20:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:08:05.522 20:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.522 20:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.522 20:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.522 20:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:05.522 20:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.522 20:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.522 [ 00:08:05.522 { 00:08:05.522 "name": "BaseBdev1", 00:08:05.522 "aliases": [ 00:08:05.522 "5b43d5f0-5a08-4c3f-9c46-5706a64c26d8" 00:08:05.522 ], 00:08:05.522 "product_name": "Malloc disk", 00:08:05.522 "block_size": 512, 00:08:05.522 "num_blocks": 65536, 00:08:05.522 "uuid": "5b43d5f0-5a08-4c3f-9c46-5706a64c26d8", 00:08:05.522 "assigned_rate_limits": { 00:08:05.522 "rw_ios_per_sec": 0, 00:08:05.522 "rw_mbytes_per_sec": 0, 00:08:05.522 "r_mbytes_per_sec": 0, 00:08:05.522 "w_mbytes_per_sec": 0 00:08:05.522 }, 00:08:05.522 "claimed": true, 00:08:05.522 "claim_type": "exclusive_write", 00:08:05.522 "zoned": false, 00:08:05.522 "supported_io_types": { 00:08:05.522 "read": true, 00:08:05.522 "write": true, 00:08:05.522 "unmap": true, 00:08:05.522 "flush": true, 00:08:05.522 "reset": true, 00:08:05.522 "nvme_admin": false, 00:08:05.522 "nvme_io": false, 00:08:05.522 "nvme_io_md": false, 00:08:05.522 "write_zeroes": true, 00:08:05.522 "zcopy": true, 00:08:05.522 "get_zone_info": false, 00:08:05.522 "zone_management": false, 00:08:05.522 "zone_append": false, 00:08:05.522 "compare": false, 00:08:05.522 "compare_and_write": false, 00:08:05.522 "abort": true, 00:08:05.522 "seek_hole": false, 00:08:05.522 "seek_data": false, 00:08:05.522 "copy": true, 00:08:05.522 "nvme_iov_md": 
false 00:08:05.522 }, 00:08:05.522 "memory_domains": [ 00:08:05.522 { 00:08:05.522 "dma_device_id": "system", 00:08:05.522 "dma_device_type": 1 00:08:05.522 }, 00:08:05.522 { 00:08:05.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.522 "dma_device_type": 2 00:08:05.522 } 00:08:05.522 ], 00:08:05.522 "driver_specific": {} 00:08:05.522 } 00:08:05.522 ] 00:08:05.522 20:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.522 20:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:05.522 20:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:05.522 20:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:05.522 20:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:05.522 20:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:05.522 20:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:05.522 20:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:05.522 20:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.522 20:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.522 20:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.522 20:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.522 20:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.522 20:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:05.522 
20:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.522 20:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.522 20:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.522 20:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.522 "name": "Existed_Raid", 00:08:05.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.522 "strip_size_kb": 64, 00:08:05.522 "state": "configuring", 00:08:05.522 "raid_level": "raid0", 00:08:05.522 "superblock": false, 00:08:05.522 "num_base_bdevs": 2, 00:08:05.522 "num_base_bdevs_discovered": 1, 00:08:05.522 "num_base_bdevs_operational": 2, 00:08:05.522 "base_bdevs_list": [ 00:08:05.522 { 00:08:05.522 "name": "BaseBdev1", 00:08:05.522 "uuid": "5b43d5f0-5a08-4c3f-9c46-5706a64c26d8", 00:08:05.522 "is_configured": true, 00:08:05.522 "data_offset": 0, 00:08:05.522 "data_size": 65536 00:08:05.522 }, 00:08:05.522 { 00:08:05.522 "name": "BaseBdev2", 00:08:05.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.522 "is_configured": false, 00:08:05.522 "data_offset": 0, 00:08:05.522 "data_size": 0 00:08:05.522 } 00:08:05.522 ] 00:08:05.522 }' 00:08:05.522 20:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.522 20:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.088 20:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:06.088 20:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.088 20:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.088 [2024-11-26 20:20:59.443386] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:06.088 [2024-11-26 20:20:59.443452] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:08:06.088 20:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.088 20:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:06.088 20:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.088 20:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.088 [2024-11-26 20:20:59.451413] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:06.088 [2024-11-26 20:20:59.453554] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:06.088 [2024-11-26 20:20:59.453606] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:06.088 20:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.088 20:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:06.088 20:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:06.088 20:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:06.088 20:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:06.088 20:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:06.088 20:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:06.088 20:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:06.088 20:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:08:06.088 20:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.088 20:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.088 20:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.088 20:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.088 20:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.088 20:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.088 20:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.088 20:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.088 20:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.089 20:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.089 "name": "Existed_Raid", 00:08:06.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.089 "strip_size_kb": 64, 00:08:06.089 "state": "configuring", 00:08:06.089 "raid_level": "raid0", 00:08:06.089 "superblock": false, 00:08:06.089 "num_base_bdevs": 2, 00:08:06.089 "num_base_bdevs_discovered": 1, 00:08:06.089 "num_base_bdevs_operational": 2, 00:08:06.089 "base_bdevs_list": [ 00:08:06.089 { 00:08:06.089 "name": "BaseBdev1", 00:08:06.089 "uuid": "5b43d5f0-5a08-4c3f-9c46-5706a64c26d8", 00:08:06.089 "is_configured": true, 00:08:06.089 "data_offset": 0, 00:08:06.089 "data_size": 65536 00:08:06.089 }, 00:08:06.089 { 00:08:06.089 "name": "BaseBdev2", 00:08:06.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.089 "is_configured": false, 00:08:06.089 "data_offset": 0, 00:08:06.089 "data_size": 0 00:08:06.089 } 00:08:06.089 
] 00:08:06.089 }' 00:08:06.089 20:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.089 20:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.346 20:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:06.346 20:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.346 20:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.606 [2024-11-26 20:20:59.903297] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:06.606 [2024-11-26 20:20:59.903366] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:06.606 [2024-11-26 20:20:59.903395] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:06.606 [2024-11-26 20:20:59.903805] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:06.606 [2024-11-26 20:20:59.904001] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:06.606 [2024-11-26 20:20:59.904043] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:08:06.606 [2024-11-26 20:20:59.904326] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:06.606 BaseBdev2 00:08:06.606 20:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.606 20:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:06.606 20:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:06.606 20:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:06.606 20:20:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:06.606 20:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:06.606 20:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:06.606 20:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:06.606 20:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.606 20:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.606 20:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.606 20:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:06.606 20:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.606 20:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.606 [ 00:08:06.606 { 00:08:06.606 "name": "BaseBdev2", 00:08:06.606 "aliases": [ 00:08:06.606 "9cbe5b80-96a4-4e14-b660-0df4119d9e1c" 00:08:06.606 ], 00:08:06.606 "product_name": "Malloc disk", 00:08:06.606 "block_size": 512, 00:08:06.606 "num_blocks": 65536, 00:08:06.606 "uuid": "9cbe5b80-96a4-4e14-b660-0df4119d9e1c", 00:08:06.606 "assigned_rate_limits": { 00:08:06.606 "rw_ios_per_sec": 0, 00:08:06.606 "rw_mbytes_per_sec": 0, 00:08:06.606 "r_mbytes_per_sec": 0, 00:08:06.606 "w_mbytes_per_sec": 0 00:08:06.606 }, 00:08:06.606 "claimed": true, 00:08:06.606 "claim_type": "exclusive_write", 00:08:06.606 "zoned": false, 00:08:06.606 "supported_io_types": { 00:08:06.606 "read": true, 00:08:06.606 "write": true, 00:08:06.606 "unmap": true, 00:08:06.606 "flush": true, 00:08:06.606 "reset": true, 00:08:06.606 "nvme_admin": false, 00:08:06.606 "nvme_io": false, 00:08:06.606 "nvme_io_md": 
false, 00:08:06.606 "write_zeroes": true, 00:08:06.606 "zcopy": true, 00:08:06.606 "get_zone_info": false, 00:08:06.606 "zone_management": false, 00:08:06.606 "zone_append": false, 00:08:06.606 "compare": false, 00:08:06.606 "compare_and_write": false, 00:08:06.606 "abort": true, 00:08:06.606 "seek_hole": false, 00:08:06.606 "seek_data": false, 00:08:06.606 "copy": true, 00:08:06.606 "nvme_iov_md": false 00:08:06.606 }, 00:08:06.606 "memory_domains": [ 00:08:06.606 { 00:08:06.606 "dma_device_id": "system", 00:08:06.606 "dma_device_type": 1 00:08:06.606 }, 00:08:06.606 { 00:08:06.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.606 "dma_device_type": 2 00:08:06.606 } 00:08:06.606 ], 00:08:06.606 "driver_specific": {} 00:08:06.606 } 00:08:06.606 ] 00:08:06.606 20:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.606 20:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:06.606 20:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:06.606 20:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:06.606 20:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:08:06.606 20:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:06.606 20:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:06.606 20:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:06.606 20:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:06.606 20:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:06.606 20:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:06.606 20:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.606 20:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.606 20:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.606 20:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.606 20:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.606 20:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.606 20:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.606 20:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.606 20:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.606 "name": "Existed_Raid", 00:08:06.606 "uuid": "8f511482-9e81-4fb9-ba2e-68d548635453", 00:08:06.606 "strip_size_kb": 64, 00:08:06.606 "state": "online", 00:08:06.606 "raid_level": "raid0", 00:08:06.606 "superblock": false, 00:08:06.606 "num_base_bdevs": 2, 00:08:06.606 "num_base_bdevs_discovered": 2, 00:08:06.606 "num_base_bdevs_operational": 2, 00:08:06.606 "base_bdevs_list": [ 00:08:06.606 { 00:08:06.606 "name": "BaseBdev1", 00:08:06.606 "uuid": "5b43d5f0-5a08-4c3f-9c46-5706a64c26d8", 00:08:06.606 "is_configured": true, 00:08:06.606 "data_offset": 0, 00:08:06.606 "data_size": 65536 00:08:06.606 }, 00:08:06.606 { 00:08:06.606 "name": "BaseBdev2", 00:08:06.606 "uuid": "9cbe5b80-96a4-4e14-b660-0df4119d9e1c", 00:08:06.606 "is_configured": true, 00:08:06.606 "data_offset": 0, 00:08:06.606 "data_size": 65536 00:08:06.606 } 00:08:06.606 ] 00:08:06.606 }' 00:08:06.606 20:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:06.606 20:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.864 20:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:06.865 20:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:06.865 20:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:06.865 20:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:06.865 20:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:06.865 20:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:06.865 20:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:06.865 20:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:06.865 20:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.865 20:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.865 [2024-11-26 20:21:00.390904] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:06.865 20:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.865 20:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:06.865 "name": "Existed_Raid", 00:08:06.865 "aliases": [ 00:08:06.865 "8f511482-9e81-4fb9-ba2e-68d548635453" 00:08:06.865 ], 00:08:06.865 "product_name": "Raid Volume", 00:08:06.865 "block_size": 512, 00:08:06.865 "num_blocks": 131072, 00:08:06.865 "uuid": "8f511482-9e81-4fb9-ba2e-68d548635453", 00:08:06.865 "assigned_rate_limits": { 00:08:06.865 "rw_ios_per_sec": 0, 00:08:06.865 "rw_mbytes_per_sec": 0, 00:08:06.865 "r_mbytes_per_sec": 
0, 00:08:06.865 "w_mbytes_per_sec": 0 00:08:06.865 }, 00:08:06.865 "claimed": false, 00:08:06.865 "zoned": false, 00:08:06.865 "supported_io_types": { 00:08:06.865 "read": true, 00:08:06.865 "write": true, 00:08:06.865 "unmap": true, 00:08:06.865 "flush": true, 00:08:06.865 "reset": true, 00:08:06.865 "nvme_admin": false, 00:08:06.865 "nvme_io": false, 00:08:06.865 "nvme_io_md": false, 00:08:06.865 "write_zeroes": true, 00:08:06.865 "zcopy": false, 00:08:06.865 "get_zone_info": false, 00:08:06.865 "zone_management": false, 00:08:06.865 "zone_append": false, 00:08:06.865 "compare": false, 00:08:06.865 "compare_and_write": false, 00:08:06.865 "abort": false, 00:08:06.865 "seek_hole": false, 00:08:06.865 "seek_data": false, 00:08:06.865 "copy": false, 00:08:06.865 "nvme_iov_md": false 00:08:06.865 }, 00:08:06.865 "memory_domains": [ 00:08:06.865 { 00:08:06.865 "dma_device_id": "system", 00:08:06.865 "dma_device_type": 1 00:08:06.865 }, 00:08:06.865 { 00:08:06.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.865 "dma_device_type": 2 00:08:06.865 }, 00:08:06.865 { 00:08:06.865 "dma_device_id": "system", 00:08:06.865 "dma_device_type": 1 00:08:06.865 }, 00:08:06.865 { 00:08:06.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.865 "dma_device_type": 2 00:08:06.865 } 00:08:06.865 ], 00:08:06.865 "driver_specific": { 00:08:06.865 "raid": { 00:08:06.865 "uuid": "8f511482-9e81-4fb9-ba2e-68d548635453", 00:08:06.865 "strip_size_kb": 64, 00:08:06.865 "state": "online", 00:08:06.865 "raid_level": "raid0", 00:08:06.865 "superblock": false, 00:08:06.865 "num_base_bdevs": 2, 00:08:06.865 "num_base_bdevs_discovered": 2, 00:08:06.865 "num_base_bdevs_operational": 2, 00:08:06.865 "base_bdevs_list": [ 00:08:06.865 { 00:08:06.865 "name": "BaseBdev1", 00:08:06.865 "uuid": "5b43d5f0-5a08-4c3f-9c46-5706a64c26d8", 00:08:06.865 "is_configured": true, 00:08:06.865 "data_offset": 0, 00:08:06.865 "data_size": 65536 00:08:06.865 }, 00:08:06.865 { 00:08:06.865 "name": "BaseBdev2", 
00:08:06.865 "uuid": "9cbe5b80-96a4-4e14-b660-0df4119d9e1c", 00:08:06.865 "is_configured": true, 00:08:06.865 "data_offset": 0, 00:08:06.865 "data_size": 65536 00:08:06.865 } 00:08:06.865 ] 00:08:06.865 } 00:08:06.865 } 00:08:06.865 }' 00:08:06.865 20:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:07.123 20:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:07.123 BaseBdev2' 00:08:07.123 20:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:07.123 20:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:07.123 20:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:07.123 20:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:07.123 20:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:07.123 20:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.123 20:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.123 20:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.123 20:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:07.123 20:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:07.123 20:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:07.123 20:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:08:07.123 20:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.123 20:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.123 20:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:07.123 20:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.123 20:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:07.123 20:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:07.123 20:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:07.123 20:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.123 20:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.123 [2024-11-26 20:21:00.606253] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:07.123 [2024-11-26 20:21:00.606304] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:07.123 [2024-11-26 20:21:00.606374] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:07.123 20:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.123 20:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:07.123 20:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:07.123 20:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:07.123 20:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:07.123 20:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:08:07.123 20:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:08:07.123 20:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:07.123 20:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:07.123 20:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:07.123 20:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:07.123 20:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:07.123 20:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.123 20:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.123 20:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.123 20:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.123 20:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.123 20:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.123 20:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.123 20:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:07.123 20:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.382 20:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.382 "name": "Existed_Raid", 00:08:07.382 "uuid": "8f511482-9e81-4fb9-ba2e-68d548635453", 00:08:07.382 "strip_size_kb": 64, 00:08:07.382 
"state": "offline", 00:08:07.382 "raid_level": "raid0", 00:08:07.382 "superblock": false, 00:08:07.382 "num_base_bdevs": 2, 00:08:07.382 "num_base_bdevs_discovered": 1, 00:08:07.382 "num_base_bdevs_operational": 1, 00:08:07.382 "base_bdevs_list": [ 00:08:07.382 { 00:08:07.382 "name": null, 00:08:07.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.382 "is_configured": false, 00:08:07.382 "data_offset": 0, 00:08:07.382 "data_size": 65536 00:08:07.382 }, 00:08:07.382 { 00:08:07.382 "name": "BaseBdev2", 00:08:07.382 "uuid": "9cbe5b80-96a4-4e14-b660-0df4119d9e1c", 00:08:07.382 "is_configured": true, 00:08:07.382 "data_offset": 0, 00:08:07.382 "data_size": 65536 00:08:07.382 } 00:08:07.382 ] 00:08:07.382 }' 00:08:07.382 20:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.382 20:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.641 20:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:07.641 20:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:07.641 20:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:07.641 20:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.641 20:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.641 20:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.641 20:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.641 20:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:07.641 20:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:07.641 20:21:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:07.641 20:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.641 20:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.641 [2024-11-26 20:21:01.121621] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:07.641 [2024-11-26 20:21:01.121736] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:07.641 20:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.641 20:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:07.641 20:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:07.641 20:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.641 20:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.641 20:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.641 20:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:07.641 20:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.900 20:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:07.900 20:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:07.900 20:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:07.900 20:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 72468 00:08:07.900 20:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 72468 ']' 00:08:07.901 20:21:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 72468 00:08:07.901 20:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:07.901 20:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:07.901 20:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72468 00:08:07.901 20:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:07.901 20:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:07.901 killing process with pid 72468 00:08:07.901 20:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72468' 00:08:07.901 20:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 72468 00:08:07.901 [2024-11-26 20:21:01.246471] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:07.901 20:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 72468 00:08:07.901 [2024-11-26 20:21:01.248152] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:08.161 20:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:08.161 00:08:08.161 real 0m4.185s 00:08:08.161 user 0m6.434s 00:08:08.161 sys 0m0.888s 00:08:08.161 20:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:08.161 20:21:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.161 ************************************ 00:08:08.161 END TEST raid_state_function_test 00:08:08.161 ************************************ 00:08:08.161 20:21:01 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:08:08.161 20:21:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 
']' 00:08:08.161 20:21:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:08.161 20:21:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:08.161 ************************************ 00:08:08.161 START TEST raid_state_function_test_sb 00:08:08.161 ************************************ 00:08:08.161 20:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 true 00:08:08.161 20:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:08.161 20:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:08.161 20:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:08.161 20:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:08.161 20:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:08.161 20:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:08.161 20:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:08.161 20:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:08.161 20:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:08.161 20:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:08.161 20:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:08.161 20:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:08.161 20:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:08.161 20:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:08:08.161 20:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:08.421 20:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:08.421 20:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:08.421 20:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:08.421 20:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:08.421 20:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:08.421 20:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:08.421 20:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:08.421 20:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:08.421 20:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72710 00:08:08.421 20:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:08.421 Process raid pid: 72710 00:08:08.421 20:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72710' 00:08:08.421 20:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72710 00:08:08.421 20:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 72710 ']' 00:08:08.421 20:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.421 20:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:08.421 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:08:08.421 20:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.421 20:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:08.421 20:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.421 [2024-11-26 20:21:01.794768] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:08.421 [2024-11-26 20:21:01.794939] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:08.421 [2024-11-26 20:21:01.961392] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.681 [2024-11-26 20:21:02.044549] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.681 [2024-11-26 20:21:02.118129] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:08.681 [2024-11-26 20:21:02.118177] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:09.250 20:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:09.250 20:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:08:09.250 20:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:09.250 20:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.250 20:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.250 [2024-11-26 20:21:02.675964] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:08:09.250 [2024-11-26 20:21:02.676031] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:09.250 [2024-11-26 20:21:02.676053] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:09.250 [2024-11-26 20:21:02.676067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:09.250 20:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.250 20:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:09.250 20:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.250 20:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:09.250 20:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:09.250 20:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.250 20:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:09.250 20:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.250 20:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.250 20:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.250 20:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.250 20:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.250 20:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:08:09.250 20:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.250 20:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.250 20:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.250 20:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.250 "name": "Existed_Raid", 00:08:09.250 "uuid": "17df1400-cddd-49e4-b5b9-d6f6f2dd589b", 00:08:09.250 "strip_size_kb": 64, 00:08:09.250 "state": "configuring", 00:08:09.250 "raid_level": "raid0", 00:08:09.250 "superblock": true, 00:08:09.250 "num_base_bdevs": 2, 00:08:09.250 "num_base_bdevs_discovered": 0, 00:08:09.250 "num_base_bdevs_operational": 2, 00:08:09.250 "base_bdevs_list": [ 00:08:09.250 { 00:08:09.250 "name": "BaseBdev1", 00:08:09.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.250 "is_configured": false, 00:08:09.250 "data_offset": 0, 00:08:09.250 "data_size": 0 00:08:09.250 }, 00:08:09.250 { 00:08:09.250 "name": "BaseBdev2", 00:08:09.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.250 "is_configured": false, 00:08:09.250 "data_offset": 0, 00:08:09.250 "data_size": 0 00:08:09.250 } 00:08:09.250 ] 00:08:09.250 }' 00:08:09.250 20:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.250 20:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.820 20:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:09.820 20:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.820 20:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.820 [2024-11-26 20:21:03.147033] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:08:09.820 [2024-11-26 20:21:03.147093] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:08:09.820 20:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.820 20:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:09.820 20:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.820 20:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.820 [2024-11-26 20:21:03.159073] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:09.820 [2024-11-26 20:21:03.159127] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:09.820 [2024-11-26 20:21:03.159137] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:09.820 [2024-11-26 20:21:03.159162] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:09.820 20:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.820 20:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:09.820 20:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.820 20:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.820 [2024-11-26 20:21:03.185705] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:09.820 BaseBdev1 00:08:09.820 20:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.820 20:21:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:09.820 20:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:09.820 20:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:09.820 20:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:09.820 20:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:09.820 20:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:09.820 20:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:09.820 20:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.820 20:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.820 20:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.820 20:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:09.820 20:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.820 20:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.820 [ 00:08:09.820 { 00:08:09.820 "name": "BaseBdev1", 00:08:09.820 "aliases": [ 00:08:09.820 "6b616441-e8f4-42c7-92c0-a958d6731b17" 00:08:09.820 ], 00:08:09.820 "product_name": "Malloc disk", 00:08:09.820 "block_size": 512, 00:08:09.820 "num_blocks": 65536, 00:08:09.820 "uuid": "6b616441-e8f4-42c7-92c0-a958d6731b17", 00:08:09.820 "assigned_rate_limits": { 00:08:09.820 "rw_ios_per_sec": 0, 00:08:09.820 "rw_mbytes_per_sec": 0, 00:08:09.820 "r_mbytes_per_sec": 0, 00:08:09.820 "w_mbytes_per_sec": 0 00:08:09.820 }, 00:08:09.820 "claimed": true, 
00:08:09.820 "claim_type": "exclusive_write", 00:08:09.820 "zoned": false, 00:08:09.820 "supported_io_types": { 00:08:09.820 "read": true, 00:08:09.820 "write": true, 00:08:09.820 "unmap": true, 00:08:09.820 "flush": true, 00:08:09.820 "reset": true, 00:08:09.820 "nvme_admin": false, 00:08:09.820 "nvme_io": false, 00:08:09.820 "nvme_io_md": false, 00:08:09.820 "write_zeroes": true, 00:08:09.820 "zcopy": true, 00:08:09.820 "get_zone_info": false, 00:08:09.820 "zone_management": false, 00:08:09.820 "zone_append": false, 00:08:09.820 "compare": false, 00:08:09.820 "compare_and_write": false, 00:08:09.820 "abort": true, 00:08:09.820 "seek_hole": false, 00:08:09.820 "seek_data": false, 00:08:09.820 "copy": true, 00:08:09.820 "nvme_iov_md": false 00:08:09.820 }, 00:08:09.820 "memory_domains": [ 00:08:09.820 { 00:08:09.820 "dma_device_id": "system", 00:08:09.820 "dma_device_type": 1 00:08:09.820 }, 00:08:09.820 { 00:08:09.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.820 "dma_device_type": 2 00:08:09.820 } 00:08:09.820 ], 00:08:09.820 "driver_specific": {} 00:08:09.820 } 00:08:09.820 ] 00:08:09.820 20:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.820 20:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:09.820 20:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:09.820 20:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.820 20:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:09.820 20:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:09.820 20:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.820 20:21:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:09.820 20:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.820 20:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.820 20:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.820 20:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.820 20:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.820 20:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.820 20:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.820 20:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.820 20:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.820 20:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.820 "name": "Existed_Raid", 00:08:09.820 "uuid": "87afa8a6-5533-42ff-810a-59a73cbfad52", 00:08:09.820 "strip_size_kb": 64, 00:08:09.820 "state": "configuring", 00:08:09.820 "raid_level": "raid0", 00:08:09.820 "superblock": true, 00:08:09.820 "num_base_bdevs": 2, 00:08:09.820 "num_base_bdevs_discovered": 1, 00:08:09.820 "num_base_bdevs_operational": 2, 00:08:09.820 "base_bdevs_list": [ 00:08:09.820 { 00:08:09.820 "name": "BaseBdev1", 00:08:09.821 "uuid": "6b616441-e8f4-42c7-92c0-a958d6731b17", 00:08:09.821 "is_configured": true, 00:08:09.821 "data_offset": 2048, 00:08:09.821 "data_size": 63488 00:08:09.821 }, 00:08:09.821 { 00:08:09.821 "name": "BaseBdev2", 00:08:09.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.821 
"is_configured": false, 00:08:09.821 "data_offset": 0, 00:08:09.821 "data_size": 0 00:08:09.821 } 00:08:09.821 ] 00:08:09.821 }' 00:08:09.821 20:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.821 20:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.388 20:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:10.388 20:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.388 20:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.388 [2024-11-26 20:21:03.660956] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:10.388 [2024-11-26 20:21:03.661035] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:08:10.388 20:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.388 20:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:10.388 20:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.388 20:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.388 [2024-11-26 20:21:03.673018] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:10.388 [2024-11-26 20:21:03.675141] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:10.388 [2024-11-26 20:21:03.675198] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:10.388 20:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.388 20:21:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:10.388 20:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:10.388 20:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:10.388 20:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.388 20:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:10.388 20:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:10.388 20:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.388 20:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:10.388 20:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.389 20:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.389 20:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.389 20:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.389 20:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.389 20:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.389 20:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.389 20:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.389 20:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.389 20:21:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.389 "name": "Existed_Raid", 00:08:10.389 "uuid": "ef6795d4-5a18-418c-a04d-8d00f716071f", 00:08:10.389 "strip_size_kb": 64, 00:08:10.389 "state": "configuring", 00:08:10.389 "raid_level": "raid0", 00:08:10.389 "superblock": true, 00:08:10.389 "num_base_bdevs": 2, 00:08:10.389 "num_base_bdevs_discovered": 1, 00:08:10.389 "num_base_bdevs_operational": 2, 00:08:10.389 "base_bdevs_list": [ 00:08:10.389 { 00:08:10.389 "name": "BaseBdev1", 00:08:10.389 "uuid": "6b616441-e8f4-42c7-92c0-a958d6731b17", 00:08:10.389 "is_configured": true, 00:08:10.389 "data_offset": 2048, 00:08:10.389 "data_size": 63488 00:08:10.389 }, 00:08:10.389 { 00:08:10.389 "name": "BaseBdev2", 00:08:10.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.389 "is_configured": false, 00:08:10.389 "data_offset": 0, 00:08:10.389 "data_size": 0 00:08:10.389 } 00:08:10.389 ] 00:08:10.389 }' 00:08:10.389 20:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.389 20:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.648 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:10.648 20:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.648 20:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.648 [2024-11-26 20:21:04.170292] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:10.648 [2024-11-26 20:21:04.170557] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:10.648 [2024-11-26 20:21:04.170582] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:10.648 [2024-11-26 20:21:04.170960] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005ba0 00:08:10.648 [2024-11-26 20:21:04.171133] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:10.648 [2024-11-26 20:21:04.171157] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:08:10.648 BaseBdev2 00:08:10.648 [2024-11-26 20:21:04.171308] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:10.648 20:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.648 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:10.648 20:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:10.648 20:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:10.648 20:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:10.648 20:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:10.648 20:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:10.648 20:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:10.648 20:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.648 20:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.648 20:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.649 20:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:10.649 20:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.649 20:21:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.649 [ 00:08:10.649 { 00:08:10.649 "name": "BaseBdev2", 00:08:10.649 "aliases": [ 00:08:10.649 "1884f3a2-2987-4b03-b556-387465491aa1" 00:08:10.649 ], 00:08:10.649 "product_name": "Malloc disk", 00:08:10.649 "block_size": 512, 00:08:10.649 "num_blocks": 65536, 00:08:10.649 "uuid": "1884f3a2-2987-4b03-b556-387465491aa1", 00:08:10.649 "assigned_rate_limits": { 00:08:10.649 "rw_ios_per_sec": 0, 00:08:10.649 "rw_mbytes_per_sec": 0, 00:08:10.649 "r_mbytes_per_sec": 0, 00:08:10.928 "w_mbytes_per_sec": 0 00:08:10.928 }, 00:08:10.928 "claimed": true, 00:08:10.928 "claim_type": "exclusive_write", 00:08:10.928 "zoned": false, 00:08:10.928 "supported_io_types": { 00:08:10.928 "read": true, 00:08:10.928 "write": true, 00:08:10.928 "unmap": true, 00:08:10.928 "flush": true, 00:08:10.928 "reset": true, 00:08:10.928 "nvme_admin": false, 00:08:10.928 "nvme_io": false, 00:08:10.928 "nvme_io_md": false, 00:08:10.928 "write_zeroes": true, 00:08:10.928 "zcopy": true, 00:08:10.928 "get_zone_info": false, 00:08:10.928 "zone_management": false, 00:08:10.928 "zone_append": false, 00:08:10.928 "compare": false, 00:08:10.928 "compare_and_write": false, 00:08:10.928 "abort": true, 00:08:10.928 "seek_hole": false, 00:08:10.928 "seek_data": false, 00:08:10.928 "copy": true, 00:08:10.928 "nvme_iov_md": false 00:08:10.928 }, 00:08:10.928 "memory_domains": [ 00:08:10.928 { 00:08:10.928 "dma_device_id": "system", 00:08:10.928 "dma_device_type": 1 00:08:10.928 }, 00:08:10.928 { 00:08:10.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.928 "dma_device_type": 2 00:08:10.928 } 00:08:10.928 ], 00:08:10.928 "driver_specific": {} 00:08:10.928 } 00:08:10.928 ] 00:08:10.928 20:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.928 20:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:10.928 20:21:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:10.928 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:10.928 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:08:10.928 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.928 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:10.928 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:10.928 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.928 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:10.928 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.928 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.928 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.928 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.928 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.928 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.928 20:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.928 20:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.928 20:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.928 20:21:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.928 "name": "Existed_Raid", 00:08:10.928 "uuid": "ef6795d4-5a18-418c-a04d-8d00f716071f", 00:08:10.929 "strip_size_kb": 64, 00:08:10.929 "state": "online", 00:08:10.929 "raid_level": "raid0", 00:08:10.929 "superblock": true, 00:08:10.929 "num_base_bdevs": 2, 00:08:10.929 "num_base_bdevs_discovered": 2, 00:08:10.929 "num_base_bdevs_operational": 2, 00:08:10.929 "base_bdevs_list": [ 00:08:10.929 { 00:08:10.929 "name": "BaseBdev1", 00:08:10.929 "uuid": "6b616441-e8f4-42c7-92c0-a958d6731b17", 00:08:10.929 "is_configured": true, 00:08:10.929 "data_offset": 2048, 00:08:10.929 "data_size": 63488 00:08:10.929 }, 00:08:10.929 { 00:08:10.929 "name": "BaseBdev2", 00:08:10.929 "uuid": "1884f3a2-2987-4b03-b556-387465491aa1", 00:08:10.929 "is_configured": true, 00:08:10.929 "data_offset": 2048, 00:08:10.929 "data_size": 63488 00:08:10.929 } 00:08:10.929 ] 00:08:10.929 }' 00:08:10.929 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.929 20:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.187 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:11.188 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:11.188 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:11.188 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:11.188 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:11.188 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:11.188 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:08:11.188 20:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.188 20:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.188 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:11.188 [2024-11-26 20:21:04.641920] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:11.188 20:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.188 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:11.188 "name": "Existed_Raid", 00:08:11.188 "aliases": [ 00:08:11.188 "ef6795d4-5a18-418c-a04d-8d00f716071f" 00:08:11.188 ], 00:08:11.188 "product_name": "Raid Volume", 00:08:11.188 "block_size": 512, 00:08:11.188 "num_blocks": 126976, 00:08:11.188 "uuid": "ef6795d4-5a18-418c-a04d-8d00f716071f", 00:08:11.188 "assigned_rate_limits": { 00:08:11.188 "rw_ios_per_sec": 0, 00:08:11.188 "rw_mbytes_per_sec": 0, 00:08:11.188 "r_mbytes_per_sec": 0, 00:08:11.188 "w_mbytes_per_sec": 0 00:08:11.188 }, 00:08:11.188 "claimed": false, 00:08:11.188 "zoned": false, 00:08:11.188 "supported_io_types": { 00:08:11.188 "read": true, 00:08:11.188 "write": true, 00:08:11.188 "unmap": true, 00:08:11.188 "flush": true, 00:08:11.188 "reset": true, 00:08:11.188 "nvme_admin": false, 00:08:11.188 "nvme_io": false, 00:08:11.188 "nvme_io_md": false, 00:08:11.188 "write_zeroes": true, 00:08:11.188 "zcopy": false, 00:08:11.188 "get_zone_info": false, 00:08:11.188 "zone_management": false, 00:08:11.188 "zone_append": false, 00:08:11.188 "compare": false, 00:08:11.188 "compare_and_write": false, 00:08:11.188 "abort": false, 00:08:11.188 "seek_hole": false, 00:08:11.188 "seek_data": false, 00:08:11.188 "copy": false, 00:08:11.188 "nvme_iov_md": false 00:08:11.188 }, 00:08:11.188 "memory_domains": [ 00:08:11.188 { 00:08:11.188 
"dma_device_id": "system", 00:08:11.188 "dma_device_type": 1 00:08:11.188 }, 00:08:11.188 { 00:08:11.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.188 "dma_device_type": 2 00:08:11.188 }, 00:08:11.188 { 00:08:11.188 "dma_device_id": "system", 00:08:11.188 "dma_device_type": 1 00:08:11.188 }, 00:08:11.188 { 00:08:11.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.188 "dma_device_type": 2 00:08:11.188 } 00:08:11.188 ], 00:08:11.188 "driver_specific": { 00:08:11.188 "raid": { 00:08:11.188 "uuid": "ef6795d4-5a18-418c-a04d-8d00f716071f", 00:08:11.188 "strip_size_kb": 64, 00:08:11.188 "state": "online", 00:08:11.188 "raid_level": "raid0", 00:08:11.188 "superblock": true, 00:08:11.188 "num_base_bdevs": 2, 00:08:11.188 "num_base_bdevs_discovered": 2, 00:08:11.188 "num_base_bdevs_operational": 2, 00:08:11.188 "base_bdevs_list": [ 00:08:11.188 { 00:08:11.188 "name": "BaseBdev1", 00:08:11.188 "uuid": "6b616441-e8f4-42c7-92c0-a958d6731b17", 00:08:11.188 "is_configured": true, 00:08:11.188 "data_offset": 2048, 00:08:11.188 "data_size": 63488 00:08:11.188 }, 00:08:11.188 { 00:08:11.188 "name": "BaseBdev2", 00:08:11.188 "uuid": "1884f3a2-2987-4b03-b556-387465491aa1", 00:08:11.188 "is_configured": true, 00:08:11.188 "data_offset": 2048, 00:08:11.188 "data_size": 63488 00:08:11.188 } 00:08:11.188 ] 00:08:11.188 } 00:08:11.188 } 00:08:11.188 }' 00:08:11.188 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:11.188 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:11.188 BaseBdev2' 00:08:11.188 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.449 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:11.450 20:21:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:11.450 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:11.450 20:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.450 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.450 20:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.450 20:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.450 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:11.450 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:11.450 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:11.450 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:11.450 20:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.450 20:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.450 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.450 20:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.450 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:11.450 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:11.450 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:08:11.450 20:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.450 20:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.450 [2024-11-26 20:21:04.889212] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:11.450 [2024-11-26 20:21:04.889266] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:11.450 [2024-11-26 20:21:04.889328] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:11.450 20:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.450 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:11.450 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:11.450 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:11.450 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:11.450 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:11.450 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:08:11.450 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.450 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:11.450 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:11.450 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.450 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:08:11.450 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.450 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.450 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.450 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.450 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.450 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.450 20:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.450 20:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.450 20:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.450 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.450 "name": "Existed_Raid", 00:08:11.450 "uuid": "ef6795d4-5a18-418c-a04d-8d00f716071f", 00:08:11.450 "strip_size_kb": 64, 00:08:11.450 "state": "offline", 00:08:11.450 "raid_level": "raid0", 00:08:11.450 "superblock": true, 00:08:11.450 "num_base_bdevs": 2, 00:08:11.450 "num_base_bdevs_discovered": 1, 00:08:11.450 "num_base_bdevs_operational": 1, 00:08:11.450 "base_bdevs_list": [ 00:08:11.450 { 00:08:11.450 "name": null, 00:08:11.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.450 "is_configured": false, 00:08:11.450 "data_offset": 0, 00:08:11.450 "data_size": 63488 00:08:11.450 }, 00:08:11.450 { 00:08:11.450 "name": "BaseBdev2", 00:08:11.450 "uuid": "1884f3a2-2987-4b03-b556-387465491aa1", 00:08:11.450 "is_configured": true, 00:08:11.450 "data_offset": 2048, 00:08:11.450 "data_size": 63488 00:08:11.450 } 00:08:11.450 ] 
00:08:11.450 }' 00:08:11.450 20:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.450 20:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.022 20:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:12.022 20:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:12.022 20:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.022 20:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.022 20:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.022 20:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:12.022 20:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.022 20:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:12.022 20:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:12.022 20:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:12.022 20:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.022 20:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.022 [2024-11-26 20:21:05.389078] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:12.022 [2024-11-26 20:21:05.389147] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:12.022 20:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.022 20:21:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:12.022 20:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:12.022 20:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.022 20:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:12.022 20:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.022 20:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.022 20:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.022 20:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:12.022 20:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:12.022 20:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:12.022 20:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72710 00:08:12.022 20:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 72710 ']' 00:08:12.022 20:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 72710 00:08:12.022 20:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:08:12.022 20:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:12.022 20:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72710 00:08:12.022 20:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:12.022 20:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = 
sudo ']' 00:08:12.022 killing process with pid 72710 00:08:12.022 20:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72710' 00:08:12.022 20:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 72710 00:08:12.022 [2024-11-26 20:21:05.492368] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:12.022 20:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 72710 00:08:12.022 [2024-11-26 20:21:05.494000] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:12.589 20:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:12.589 00:08:12.589 real 0m4.164s 00:08:12.589 user 0m6.415s 00:08:12.589 sys 0m0.892s 00:08:12.589 20:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:12.589 20:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.589 ************************************ 00:08:12.589 END TEST raid_state_function_test_sb 00:08:12.589 ************************************ 00:08:12.589 20:21:05 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:08:12.589 20:21:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:12.589 20:21:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:12.589 20:21:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:12.589 ************************************ 00:08:12.589 START TEST raid_superblock_test 00:08:12.589 ************************************ 00:08:12.589 20:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 2 00:08:12.589 20:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:12.589 20:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- 
# local num_base_bdevs=2 00:08:12.589 20:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:12.589 20:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:12.589 20:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:12.589 20:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:12.589 20:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:12.589 20:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:12.589 20:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:12.589 20:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:12.589 20:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:12.589 20:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:12.589 20:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:12.589 20:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:12.589 20:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:12.589 20:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:12.589 20:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72951 00:08:12.589 20:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72951 00:08:12.589 20:21:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:12.589 20:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 72951 ']' 00:08:12.589 20:21:05 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.589 20:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:12.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.589 20:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.589 20:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:12.589 20:21:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.589 [2024-11-26 20:21:06.025913] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:12.589 [2024-11-26 20:21:06.026042] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72951 ] 00:08:12.847 [2024-11-26 20:21:06.172233] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.847 [2024-11-26 20:21:06.254576] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.847 [2024-11-26 20:21:06.329317] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:12.847 [2024-11-26 20:21:06.329355] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:13.414 20:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:13.414 20:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:13.414 20:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:13.414 20:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:13.414 
20:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:13.414 20:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:13.414 20:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:13.414 20:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:13.414 20:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:13.414 20:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:13.414 20:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:13.414 20:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.414 20:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.414 malloc1 00:08:13.414 20:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.414 20:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:13.414 20:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.414 20:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.414 [2024-11-26 20:21:06.931744] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:13.414 [2024-11-26 20:21:06.931838] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:13.414 [2024-11-26 20:21:06.931864] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:13.414 [2024-11-26 20:21:06.931890] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:08:13.414 [2024-11-26 20:21:06.934401] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:13.414 [2024-11-26 20:21:06.934446] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:13.414 pt1 00:08:13.414 20:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.414 20:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:13.414 20:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:13.414 20:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:13.414 20:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:13.414 20:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:13.414 20:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:13.415 20:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:13.415 20:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:13.415 20:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:13.415 20:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.415 20:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.679 malloc2 00:08:13.679 20:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.679 20:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:13.679 20:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:13.679 20:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.679 [2024-11-26 20:21:06.974106] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:13.679 [2024-11-26 20:21:06.974173] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:13.679 [2024-11-26 20:21:06.974192] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:13.679 [2024-11-26 20:21:06.974205] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:13.679 [2024-11-26 20:21:06.976613] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:13.679 [2024-11-26 20:21:06.976667] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:13.679 pt2 00:08:13.679 20:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.679 20:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:13.679 20:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:13.679 20:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:13.679 20:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.679 20:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.679 [2024-11-26 20:21:06.986131] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:13.679 [2024-11-26 20:21:06.988007] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:13.679 [2024-11-26 20:21:06.988150] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:08:13.679 [2024-11-26 20:21:06.988166] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:08:13.680 [2024-11-26 20:21:06.988452] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:13.680 [2024-11-26 20:21:06.988604] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:08:13.680 [2024-11-26 20:21:06.988633] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:08:13.680 [2024-11-26 20:21:06.988782] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:13.680 20:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.680 20:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:13.680 20:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:13.680 20:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:13.680 20:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:13.680 20:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.680 20:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:13.680 20:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.680 20:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.680 20:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.680 20:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.680 20:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.680 20:21:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:13.680 20:21:06 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.680 20:21:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.680 20:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.680 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.680 "name": "raid_bdev1", 00:08:13.680 "uuid": "b96915af-dfe6-47f1-885b-c0d87f28b09e", 00:08:13.680 "strip_size_kb": 64, 00:08:13.680 "state": "online", 00:08:13.680 "raid_level": "raid0", 00:08:13.680 "superblock": true, 00:08:13.680 "num_base_bdevs": 2, 00:08:13.680 "num_base_bdevs_discovered": 2, 00:08:13.680 "num_base_bdevs_operational": 2, 00:08:13.680 "base_bdevs_list": [ 00:08:13.680 { 00:08:13.680 "name": "pt1", 00:08:13.680 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:13.680 "is_configured": true, 00:08:13.680 "data_offset": 2048, 00:08:13.680 "data_size": 63488 00:08:13.680 }, 00:08:13.680 { 00:08:13.680 "name": "pt2", 00:08:13.680 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:13.680 "is_configured": true, 00:08:13.680 "data_offset": 2048, 00:08:13.680 "data_size": 63488 00:08:13.680 } 00:08:13.680 ] 00:08:13.680 }' 00:08:13.680 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.680 20:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.948 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:13.948 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:13.948 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:13.948 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:13.948 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:13.948 
20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:13.948 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:13.948 20:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.948 20:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.948 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:13.948 [2024-11-26 20:21:07.457655] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:13.948 20:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.948 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:13.948 "name": "raid_bdev1", 00:08:13.948 "aliases": [ 00:08:13.948 "b96915af-dfe6-47f1-885b-c0d87f28b09e" 00:08:13.948 ], 00:08:13.948 "product_name": "Raid Volume", 00:08:13.948 "block_size": 512, 00:08:13.948 "num_blocks": 126976, 00:08:13.948 "uuid": "b96915af-dfe6-47f1-885b-c0d87f28b09e", 00:08:13.948 "assigned_rate_limits": { 00:08:13.948 "rw_ios_per_sec": 0, 00:08:13.948 "rw_mbytes_per_sec": 0, 00:08:13.948 "r_mbytes_per_sec": 0, 00:08:13.948 "w_mbytes_per_sec": 0 00:08:13.948 }, 00:08:13.948 "claimed": false, 00:08:13.948 "zoned": false, 00:08:13.948 "supported_io_types": { 00:08:13.948 "read": true, 00:08:13.948 "write": true, 00:08:13.948 "unmap": true, 00:08:13.948 "flush": true, 00:08:13.948 "reset": true, 00:08:13.948 "nvme_admin": false, 00:08:13.948 "nvme_io": false, 00:08:13.948 "nvme_io_md": false, 00:08:13.948 "write_zeroes": true, 00:08:13.948 "zcopy": false, 00:08:13.948 "get_zone_info": false, 00:08:13.948 "zone_management": false, 00:08:13.948 "zone_append": false, 00:08:13.948 "compare": false, 00:08:13.948 "compare_and_write": false, 00:08:13.948 "abort": false, 00:08:13.948 "seek_hole": false, 00:08:13.948 
"seek_data": false, 00:08:13.948 "copy": false, 00:08:13.948 "nvme_iov_md": false 00:08:13.948 }, 00:08:13.948 "memory_domains": [ 00:08:13.948 { 00:08:13.948 "dma_device_id": "system", 00:08:13.948 "dma_device_type": 1 00:08:13.948 }, 00:08:13.948 { 00:08:13.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.948 "dma_device_type": 2 00:08:13.948 }, 00:08:13.948 { 00:08:13.948 "dma_device_id": "system", 00:08:13.948 "dma_device_type": 1 00:08:13.948 }, 00:08:13.948 { 00:08:13.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.948 "dma_device_type": 2 00:08:13.948 } 00:08:13.948 ], 00:08:13.948 "driver_specific": { 00:08:13.948 "raid": { 00:08:13.948 "uuid": "b96915af-dfe6-47f1-885b-c0d87f28b09e", 00:08:13.948 "strip_size_kb": 64, 00:08:13.948 "state": "online", 00:08:13.948 "raid_level": "raid0", 00:08:13.948 "superblock": true, 00:08:13.948 "num_base_bdevs": 2, 00:08:13.948 "num_base_bdevs_discovered": 2, 00:08:13.948 "num_base_bdevs_operational": 2, 00:08:13.948 "base_bdevs_list": [ 00:08:13.948 { 00:08:13.948 "name": "pt1", 00:08:13.948 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:13.948 "is_configured": true, 00:08:13.948 "data_offset": 2048, 00:08:13.948 "data_size": 63488 00:08:13.948 }, 00:08:13.948 { 00:08:13.948 "name": "pt2", 00:08:13.948 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:13.948 "is_configured": true, 00:08:13.948 "data_offset": 2048, 00:08:13.948 "data_size": 63488 00:08:13.948 } 00:08:13.948 ] 00:08:13.948 } 00:08:13.948 } 00:08:13.948 }' 00:08:13.948 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:14.207 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:14.207 pt2' 00:08:14.207 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:14.207 20:21:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:14.207 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:14.207 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:14.207 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:14.207 20:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.207 20:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.207 20:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.207 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:14.208 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:14.208 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:14.208 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:14.208 20:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.208 20:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.208 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:14.208 20:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.208 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:14.208 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:14.208 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:08:14.208 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:14.208 20:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.208 20:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.208 [2024-11-26 20:21:07.681243] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:14.208 20:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.208 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b96915af-dfe6-47f1-885b-c0d87f28b09e 00:08:14.208 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z b96915af-dfe6-47f1-885b-c0d87f28b09e ']' 00:08:14.208 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:14.208 20:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.208 20:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.208 [2024-11-26 20:21:07.708896] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:14.208 [2024-11-26 20:21:07.708941] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:14.208 [2024-11-26 20:21:07.709058] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:14.208 [2024-11-26 20:21:07.709122] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:14.208 [2024-11-26 20:21:07.709140] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:08:14.208 20:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.208 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r 
'.[]' 00:08:14.208 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.208 20:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.208 20:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.208 20:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.208 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:14.208 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:14.208 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:14.208 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:14.208 20:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.208 20:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.468 20:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.468 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:14.468 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:14.468 20:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.468 20:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.468 20:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.468 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:14.468 20:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.468 20:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:14.468 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:14.468 20:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.468 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:14.468 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:14.468 20:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:14.468 20:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:14.468 20:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:14.468 20:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:14.468 20:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:14.468 20:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:14.468 20:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:14.468 20:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.468 20:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.468 [2024-11-26 20:21:07.828819] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:14.468 [2024-11-26 20:21:07.830901] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:14.468 [2024-11-26 20:21:07.830988] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:14.468 [2024-11-26 20:21:07.831042] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:14.468 [2024-11-26 20:21:07.831060] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:14.468 [2024-11-26 20:21:07.831070] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:08:14.468 request: 00:08:14.468 { 00:08:14.468 "name": "raid_bdev1", 00:08:14.468 "raid_level": "raid0", 00:08:14.468 "base_bdevs": [ 00:08:14.468 "malloc1", 00:08:14.468 "malloc2" 00:08:14.468 ], 00:08:14.468 "strip_size_kb": 64, 00:08:14.468 "superblock": false, 00:08:14.468 "method": "bdev_raid_create", 00:08:14.468 "req_id": 1 00:08:14.468 } 00:08:14.468 Got JSON-RPC error response 00:08:14.468 response: 00:08:14.468 { 00:08:14.468 "code": -17, 00:08:14.468 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:14.468 } 00:08:14.468 20:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:14.468 20:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:14.468 20:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:14.468 20:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:14.468 20:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:14.468 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:14.468 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.468 20:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.468 20:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.468 
20:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.468 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:14.468 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:14.468 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:14.468 20:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.468 20:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.468 [2024-11-26 20:21:07.876669] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:14.468 [2024-11-26 20:21:07.876741] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:14.468 [2024-11-26 20:21:07.876762] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:14.468 [2024-11-26 20:21:07.876772] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:14.468 [2024-11-26 20:21:07.879155] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:14.468 [2024-11-26 20:21:07.879190] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:14.468 [2024-11-26 20:21:07.879292] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:14.468 [2024-11-26 20:21:07.879336] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:14.468 pt1 00:08:14.468 20:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.468 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:08:14.468 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:08:14.468 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:14.468 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:14.468 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.468 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:14.468 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.468 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.468 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.468 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.468 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.468 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:14.468 20:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.468 20:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.468 20:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.468 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.468 "name": "raid_bdev1", 00:08:14.468 "uuid": "b96915af-dfe6-47f1-885b-c0d87f28b09e", 00:08:14.468 "strip_size_kb": 64, 00:08:14.468 "state": "configuring", 00:08:14.468 "raid_level": "raid0", 00:08:14.468 "superblock": true, 00:08:14.468 "num_base_bdevs": 2, 00:08:14.468 "num_base_bdevs_discovered": 1, 00:08:14.468 "num_base_bdevs_operational": 2, 00:08:14.468 "base_bdevs_list": [ 00:08:14.468 { 00:08:14.468 "name": "pt1", 00:08:14.468 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:08:14.468 "is_configured": true, 00:08:14.468 "data_offset": 2048, 00:08:14.468 "data_size": 63488 00:08:14.468 }, 00:08:14.468 { 00:08:14.468 "name": null, 00:08:14.468 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:14.468 "is_configured": false, 00:08:14.468 "data_offset": 2048, 00:08:14.468 "data_size": 63488 00:08:14.468 } 00:08:14.468 ] 00:08:14.468 }' 00:08:14.468 20:21:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.468 20:21:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.038 20:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:15.038 20:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:15.038 20:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:15.038 20:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:15.038 20:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.038 20:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.038 [2024-11-26 20:21:08.327949] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:15.038 [2024-11-26 20:21:08.328039] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:15.038 [2024-11-26 20:21:08.328067] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:15.038 [2024-11-26 20:21:08.328078] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:15.038 [2024-11-26 20:21:08.328561] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:15.038 [2024-11-26 20:21:08.328595] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:08:15.038 [2024-11-26 20:21:08.328701] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:15.038 [2024-11-26 20:21:08.328738] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:15.038 [2024-11-26 20:21:08.328836] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:15.038 [2024-11-26 20:21:08.328851] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:15.038 [2024-11-26 20:21:08.329107] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:15.038 [2024-11-26 20:21:08.329239] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:15.038 [2024-11-26 20:21:08.329259] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:08:15.038 [2024-11-26 20:21:08.329371] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:15.038 pt2 00:08:15.038 20:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.038 20:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:15.038 20:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:15.038 20:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:15.038 20:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:15.038 20:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:15.038 20:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:15.038 20:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.038 20:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:08:15.038 20:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.038 20:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.038 20:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.038 20:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.038 20:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.038 20:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.038 20:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.038 20:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:15.038 20:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.038 20:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.038 "name": "raid_bdev1", 00:08:15.038 "uuid": "b96915af-dfe6-47f1-885b-c0d87f28b09e", 00:08:15.038 "strip_size_kb": 64, 00:08:15.038 "state": "online", 00:08:15.038 "raid_level": "raid0", 00:08:15.038 "superblock": true, 00:08:15.038 "num_base_bdevs": 2, 00:08:15.038 "num_base_bdevs_discovered": 2, 00:08:15.038 "num_base_bdevs_operational": 2, 00:08:15.038 "base_bdevs_list": [ 00:08:15.038 { 00:08:15.038 "name": "pt1", 00:08:15.038 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:15.038 "is_configured": true, 00:08:15.038 "data_offset": 2048, 00:08:15.038 "data_size": 63488 00:08:15.038 }, 00:08:15.038 { 00:08:15.038 "name": "pt2", 00:08:15.038 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:15.038 "is_configured": true, 00:08:15.038 "data_offset": 2048, 00:08:15.038 "data_size": 63488 00:08:15.038 } 00:08:15.038 ] 00:08:15.038 }' 00:08:15.038 20:21:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.038 20:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.298 20:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:15.298 20:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:15.298 20:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:15.298 20:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:15.298 20:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:15.298 20:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:15.298 20:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:15.298 20:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.298 20:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.298 20:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:15.298 [2024-11-26 20:21:08.811343] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:15.298 20:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.298 20:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:15.298 "name": "raid_bdev1", 00:08:15.298 "aliases": [ 00:08:15.298 "b96915af-dfe6-47f1-885b-c0d87f28b09e" 00:08:15.298 ], 00:08:15.298 "product_name": "Raid Volume", 00:08:15.298 "block_size": 512, 00:08:15.298 "num_blocks": 126976, 00:08:15.298 "uuid": "b96915af-dfe6-47f1-885b-c0d87f28b09e", 00:08:15.298 "assigned_rate_limits": { 00:08:15.298 "rw_ios_per_sec": 0, 00:08:15.298 "rw_mbytes_per_sec": 0, 00:08:15.298 
"r_mbytes_per_sec": 0, 00:08:15.298 "w_mbytes_per_sec": 0 00:08:15.298 }, 00:08:15.298 "claimed": false, 00:08:15.298 "zoned": false, 00:08:15.298 "supported_io_types": { 00:08:15.298 "read": true, 00:08:15.298 "write": true, 00:08:15.298 "unmap": true, 00:08:15.298 "flush": true, 00:08:15.298 "reset": true, 00:08:15.298 "nvme_admin": false, 00:08:15.298 "nvme_io": false, 00:08:15.298 "nvme_io_md": false, 00:08:15.298 "write_zeroes": true, 00:08:15.298 "zcopy": false, 00:08:15.298 "get_zone_info": false, 00:08:15.298 "zone_management": false, 00:08:15.298 "zone_append": false, 00:08:15.298 "compare": false, 00:08:15.298 "compare_and_write": false, 00:08:15.298 "abort": false, 00:08:15.298 "seek_hole": false, 00:08:15.298 "seek_data": false, 00:08:15.298 "copy": false, 00:08:15.298 "nvme_iov_md": false 00:08:15.298 }, 00:08:15.298 "memory_domains": [ 00:08:15.298 { 00:08:15.298 "dma_device_id": "system", 00:08:15.298 "dma_device_type": 1 00:08:15.298 }, 00:08:15.298 { 00:08:15.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.298 "dma_device_type": 2 00:08:15.298 }, 00:08:15.298 { 00:08:15.298 "dma_device_id": "system", 00:08:15.298 "dma_device_type": 1 00:08:15.298 }, 00:08:15.298 { 00:08:15.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.298 "dma_device_type": 2 00:08:15.299 } 00:08:15.299 ], 00:08:15.299 "driver_specific": { 00:08:15.299 "raid": { 00:08:15.299 "uuid": "b96915af-dfe6-47f1-885b-c0d87f28b09e", 00:08:15.299 "strip_size_kb": 64, 00:08:15.299 "state": "online", 00:08:15.299 "raid_level": "raid0", 00:08:15.299 "superblock": true, 00:08:15.299 "num_base_bdevs": 2, 00:08:15.299 "num_base_bdevs_discovered": 2, 00:08:15.299 "num_base_bdevs_operational": 2, 00:08:15.299 "base_bdevs_list": [ 00:08:15.299 { 00:08:15.299 "name": "pt1", 00:08:15.299 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:15.299 "is_configured": true, 00:08:15.299 "data_offset": 2048, 00:08:15.299 "data_size": 63488 00:08:15.299 }, 00:08:15.299 { 00:08:15.299 "name": 
"pt2", 00:08:15.299 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:15.299 "is_configured": true, 00:08:15.299 "data_offset": 2048, 00:08:15.299 "data_size": 63488 00:08:15.299 } 00:08:15.299 ] 00:08:15.299 } 00:08:15.299 } 00:08:15.299 }' 00:08:15.558 20:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:15.558 20:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:15.558 pt2' 00:08:15.558 20:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:15.558 20:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:15.558 20:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:15.558 20:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:15.558 20:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.558 20:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:15.558 20:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.558 20:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.558 20:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:15.558 20:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:15.558 20:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:15.558 20:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:15.558 20:21:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.558 20:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.558 20:21:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:15.558 20:21:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.558 20:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:15.558 20:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:15.558 20:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:15.558 20:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:15.558 20:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.558 20:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.558 [2024-11-26 20:21:09.035034] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:15.558 20:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.558 20:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b96915af-dfe6-47f1-885b-c0d87f28b09e '!=' b96915af-dfe6-47f1-885b-c0d87f28b09e ']' 00:08:15.558 20:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:15.558 20:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:15.558 20:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:15.558 20:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72951 00:08:15.558 20:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 72951 ']' 00:08:15.558 20:21:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@954 -- # kill -0 72951 00:08:15.558 20:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:15.558 20:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:15.558 20:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72951 00:08:15.558 20:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:15.558 20:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:15.558 killing process with pid 72951 00:08:15.558 20:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72951' 00:08:15.559 20:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 72951 00:08:15.559 [2024-11-26 20:21:09.103412] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:15.559 [2024-11-26 20:21:09.103511] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:15.559 [2024-11-26 20:21:09.103563] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:15.559 [2024-11-26 20:21:09.103574] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:08:15.559 20:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 72951 00:08:15.817 [2024-11-26 20:21:09.140873] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:16.076 20:21:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:16.076 00:08:16.076 real 0m3.579s 00:08:16.076 user 0m5.357s 00:08:16.076 sys 0m0.801s 00:08:16.076 20:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:16.076 20:21:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:16.076 ************************************ 00:08:16.076 END TEST raid_superblock_test 00:08:16.076 ************************************ 00:08:16.076 20:21:09 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:08:16.076 20:21:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:16.076 20:21:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:16.076 20:21:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:16.076 ************************************ 00:08:16.076 START TEST raid_read_error_test 00:08:16.076 ************************************ 00:08:16.076 20:21:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 read 00:08:16.076 20:21:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:16.076 20:21:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:16.076 20:21:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:16.076 20:21:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:16.076 20:21:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:16.076 20:21:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:16.076 20:21:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:16.076 20:21:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:16.076 20:21:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:16.076 20:21:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:16.076 20:21:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:16.077 20:21:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:16.077 20:21:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:16.077 20:21:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:16.077 20:21:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:16.077 20:21:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:16.077 20:21:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:16.077 20:21:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:16.077 20:21:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:16.077 20:21:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:16.077 20:21:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:16.077 20:21:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:16.077 20:21:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.DE2EAUiAYs 00:08:16.077 20:21:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73152 00:08:16.077 20:21:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73152 00:08:16.077 20:21:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:16.077 20:21:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 73152 ']' 00:08:16.077 20:21:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.077 20:21:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:16.077 20:21:09 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.077 20:21:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:16.077 20:21:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.337 [2024-11-26 20:21:09.671006] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:16.337 [2024-11-26 20:21:09.671160] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73152 ] 00:08:16.337 [2024-11-26 20:21:09.831538] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.597 [2024-11-26 20:21:09.911573] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.597 [2024-11-26 20:21:09.984317] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:16.597 [2024-11-26 20:21:09.984359] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:17.168 20:21:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:17.168 20:21:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:17.168 20:21:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:17.168 20:21:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:17.168 20:21:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.168 20:21:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.168 BaseBdev1_malloc 
00:08:17.168 20:21:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.168 20:21:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:17.168 20:21:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.168 20:21:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.168 true 00:08:17.168 20:21:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.168 20:21:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:17.168 20:21:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.168 20:21:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.168 [2024-11-26 20:21:10.568435] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:17.168 [2024-11-26 20:21:10.568498] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:17.168 [2024-11-26 20:21:10.568545] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:17.168 [2024-11-26 20:21:10.568556] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:17.168 [2024-11-26 20:21:10.570843] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:17.168 [2024-11-26 20:21:10.570894] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:17.168 BaseBdev1 00:08:17.168 20:21:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.168 20:21:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:17.168 20:21:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2_malloc 00:08:17.168 20:21:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.168 20:21:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.168 BaseBdev2_malloc 00:08:17.168 20:21:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.168 20:21:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:17.168 20:21:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.168 20:21:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.168 true 00:08:17.168 20:21:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.168 20:21:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:17.168 20:21:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.168 20:21:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.168 [2024-11-26 20:21:10.625811] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:17.168 [2024-11-26 20:21:10.625877] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:17.168 [2024-11-26 20:21:10.625915] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:17.168 [2024-11-26 20:21:10.625924] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:17.168 [2024-11-26 20:21:10.628280] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:17.168 [2024-11-26 20:21:10.628323] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:17.168 BaseBdev2 00:08:17.168 20:21:10 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.168 20:21:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:17.168 20:21:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.168 20:21:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.168 [2024-11-26 20:21:10.637840] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:17.168 [2024-11-26 20:21:10.639758] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:17.168 [2024-11-26 20:21:10.639990] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:17.168 [2024-11-26 20:21:10.640006] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:17.168 [2024-11-26 20:21:10.640298] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:17.168 [2024-11-26 20:21:10.640484] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:17.168 [2024-11-26 20:21:10.640506] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:08:17.168 [2024-11-26 20:21:10.640692] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:17.168 20:21:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.168 20:21:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:17.168 20:21:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:17.168 20:21:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:17.168 20:21:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid0 00:08:17.168 20:21:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:17.168 20:21:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:17.168 20:21:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.168 20:21:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.169 20:21:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.169 20:21:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.169 20:21:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.169 20:21:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:17.169 20:21:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.169 20:21:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.169 20:21:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.169 20:21:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.169 "name": "raid_bdev1", 00:08:17.169 "uuid": "043d41ee-8ca5-4eea-bc60-f8baedc5f366", 00:08:17.169 "strip_size_kb": 64, 00:08:17.169 "state": "online", 00:08:17.169 "raid_level": "raid0", 00:08:17.169 "superblock": true, 00:08:17.169 "num_base_bdevs": 2, 00:08:17.169 "num_base_bdevs_discovered": 2, 00:08:17.169 "num_base_bdevs_operational": 2, 00:08:17.169 "base_bdevs_list": [ 00:08:17.169 { 00:08:17.169 "name": "BaseBdev1", 00:08:17.169 "uuid": "1e384b85-633d-5543-86ea-18fc4b64fae4", 00:08:17.169 "is_configured": true, 00:08:17.169 "data_offset": 2048, 00:08:17.169 "data_size": 63488 00:08:17.169 }, 00:08:17.169 { 00:08:17.169 "name": "BaseBdev2", 00:08:17.169 "uuid": 
"13b31683-6f4e-5d85-b04c-2fa0e9d07bd2", 00:08:17.169 "is_configured": true, 00:08:17.169 "data_offset": 2048, 00:08:17.169 "data_size": 63488 00:08:17.169 } 00:08:17.169 ] 00:08:17.169 }' 00:08:17.169 20:21:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.169 20:21:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.734 20:21:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:17.734 20:21:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:17.734 [2024-11-26 20:21:11.201325] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:18.703 20:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:18.703 20:21:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.703 20:21:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.703 20:21:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.703 20:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:18.703 20:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:18.703 20:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:18.703 20:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:18.703 20:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:18.703 20:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:18.703 20:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:18.703 20:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.703 20:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:18.703 20:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.703 20:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.703 20:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.703 20:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.703 20:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.703 20:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:18.703 20:21:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.703 20:21:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.703 20:21:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.703 20:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.703 "name": "raid_bdev1", 00:08:18.703 "uuid": "043d41ee-8ca5-4eea-bc60-f8baedc5f366", 00:08:18.703 "strip_size_kb": 64, 00:08:18.703 "state": "online", 00:08:18.703 "raid_level": "raid0", 00:08:18.703 "superblock": true, 00:08:18.703 "num_base_bdevs": 2, 00:08:18.703 "num_base_bdevs_discovered": 2, 00:08:18.703 "num_base_bdevs_operational": 2, 00:08:18.703 "base_bdevs_list": [ 00:08:18.703 { 00:08:18.703 "name": "BaseBdev1", 00:08:18.703 "uuid": "1e384b85-633d-5543-86ea-18fc4b64fae4", 00:08:18.703 "is_configured": true, 00:08:18.703 "data_offset": 2048, 00:08:18.703 "data_size": 63488 00:08:18.703 }, 00:08:18.703 { 00:08:18.703 "name": "BaseBdev2", 00:08:18.703 "uuid": 
"13b31683-6f4e-5d85-b04c-2fa0e9d07bd2", 00:08:18.703 "is_configured": true, 00:08:18.703 "data_offset": 2048, 00:08:18.703 "data_size": 63488 00:08:18.703 } 00:08:18.703 ] 00:08:18.703 }' 00:08:18.703 20:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.703 20:21:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.268 20:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:19.268 20:21:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.268 20:21:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.268 [2024-11-26 20:21:12.558091] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:19.269 [2024-11-26 20:21:12.558192] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:19.269 [2024-11-26 20:21:12.561114] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:19.269 [2024-11-26 20:21:12.561203] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:19.269 [2024-11-26 20:21:12.561269] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:19.269 [2024-11-26 20:21:12.561318] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:08:19.269 { 00:08:19.269 "results": [ 00:08:19.269 { 00:08:19.269 "job": "raid_bdev1", 00:08:19.269 "core_mask": "0x1", 00:08:19.269 "workload": "randrw", 00:08:19.269 "percentage": 50, 00:08:19.269 "status": "finished", 00:08:19.269 "queue_depth": 1, 00:08:19.269 "io_size": 131072, 00:08:19.269 "runtime": 1.3576, 00:08:19.269 "iops": 14397.466116676487, 00:08:19.269 "mibps": 1799.683264584561, 00:08:19.269 "io_failed": 1, 00:08:19.269 "io_timeout": 0, 00:08:19.269 "avg_latency_us": 98.29757223827107, 
00:08:19.269 "min_latency_us": 26.047161572052403, 00:08:19.269 "max_latency_us": 1480.9991266375546 00:08:19.269 } 00:08:19.269 ], 00:08:19.269 "core_count": 1 00:08:19.269 } 00:08:19.269 20:21:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.269 20:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73152 00:08:19.269 20:21:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 73152 ']' 00:08:19.269 20:21:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 73152 00:08:19.269 20:21:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:19.269 20:21:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:19.269 20:21:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73152 00:08:19.269 20:21:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:19.269 20:21:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:19.269 20:21:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73152' 00:08:19.269 killing process with pid 73152 00:08:19.269 20:21:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 73152 00:08:19.269 [2024-11-26 20:21:12.608590] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:19.269 20:21:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 73152 00:08:19.269 [2024-11-26 20:21:12.632806] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:19.528 20:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:19.528 20:21:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.DE2EAUiAYs 00:08:19.528 20:21:12 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:19.528 ************************************ 00:08:19.528 END TEST raid_read_error_test 00:08:19.528 ************************************ 00:08:19.528 20:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:08:19.528 20:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:19.528 20:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:19.528 20:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:19.528 20:21:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:08:19.528 00:08:19.528 real 0m3.440s 00:08:19.528 user 0m4.272s 00:08:19.528 sys 0m0.607s 00:08:19.528 20:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:19.528 20:21:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.528 20:21:13 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:08:19.528 20:21:13 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:19.528 20:21:13 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:19.528 20:21:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:19.786 ************************************ 00:08:19.786 START TEST raid_write_error_test 00:08:19.786 ************************************ 00:08:19.786 20:21:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 write 00:08:19.786 20:21:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:19.786 20:21:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:19.786 20:21:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:19.786 
20:21:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:19.786 20:21:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:19.786 20:21:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:19.786 20:21:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:19.786 20:21:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:19.786 20:21:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:19.786 20:21:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:19.786 20:21:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:19.786 20:21:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:19.786 20:21:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:19.786 20:21:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:19.786 20:21:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:19.786 20:21:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:19.786 20:21:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:19.786 20:21:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:19.786 20:21:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:19.787 20:21:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:19.787 20:21:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:19.787 20:21:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:19.787 20:21:13 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.3Q6pv0EpP6 00:08:19.787 20:21:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73286 00:08:19.787 20:21:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:19.787 20:21:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73286 00:08:19.787 20:21:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 73286 ']' 00:08:19.787 20:21:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.787 20:21:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:19.787 20:21:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.787 20:21:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:19.787 20:21:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.787 [2024-11-26 20:21:13.183092] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:19.787 [2024-11-26 20:21:13.183330] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73286 ] 00:08:20.045 [2024-11-26 20:21:13.344664] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.045 [2024-11-26 20:21:13.430750] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.045 [2024-11-26 20:21:13.511162] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:20.045 [2024-11-26 20:21:13.511289] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:20.612 20:21:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:20.612 20:21:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:20.612 20:21:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:20.612 20:21:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:20.612 20:21:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.612 20:21:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.612 BaseBdev1_malloc 00:08:20.612 20:21:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.612 20:21:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:20.612 20:21:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.612 20:21:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.612 true 00:08:20.612 20:21:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:08:20.612 20:21:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:20.612 20:21:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.612 20:21:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.612 [2024-11-26 20:21:14.084098] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:20.612 [2024-11-26 20:21:14.084154] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:20.612 [2024-11-26 20:21:14.084173] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:20.612 [2024-11-26 20:21:14.084182] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:20.612 [2024-11-26 20:21:14.086414] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:20.612 [2024-11-26 20:21:14.086461] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:20.612 BaseBdev1 00:08:20.612 20:21:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.612 20:21:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:20.612 20:21:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:20.612 20:21:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.612 20:21:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.612 BaseBdev2_malloc 00:08:20.612 20:21:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.612 20:21:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:20.612 20:21:14 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.612 20:21:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.612 true 00:08:20.612 20:21:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.612 20:21:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:20.612 20:21:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.612 20:21:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.612 [2024-11-26 20:21:14.137491] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:20.612 [2024-11-26 20:21:14.137548] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:20.612 [2024-11-26 20:21:14.137570] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:20.612 [2024-11-26 20:21:14.137579] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:20.612 [2024-11-26 20:21:14.139947] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:20.612 [2024-11-26 20:21:14.140038] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:20.612 BaseBdev2 00:08:20.612 20:21:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.612 20:21:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:20.612 20:21:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.612 20:21:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.612 [2024-11-26 20:21:14.149531] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:20.612 [2024-11-26 20:21:14.151644] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:20.612 [2024-11-26 20:21:14.151830] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:20.612 [2024-11-26 20:21:14.151862] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:20.612 [2024-11-26 20:21:14.152167] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:20.612 [2024-11-26 20:21:14.152328] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:20.612 [2024-11-26 20:21:14.152343] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:08:20.612 [2024-11-26 20:21:14.152499] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:20.612 20:21:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.612 20:21:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:20.612 20:21:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:20.612 20:21:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:20.612 20:21:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:20.612 20:21:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:20.612 20:21:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:20.612 20:21:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.612 20:21:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.612 20:21:14 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.612 20:21:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.612 20:21:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.612 20:21:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:20.613 20:21:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.613 20:21:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.872 20:21:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.872 20:21:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.872 "name": "raid_bdev1", 00:08:20.872 "uuid": "2b8eb814-ba92-4b6c-8fe2-9d16e41350c9", 00:08:20.872 "strip_size_kb": 64, 00:08:20.872 "state": "online", 00:08:20.872 "raid_level": "raid0", 00:08:20.872 "superblock": true, 00:08:20.872 "num_base_bdevs": 2, 00:08:20.872 "num_base_bdevs_discovered": 2, 00:08:20.872 "num_base_bdevs_operational": 2, 00:08:20.872 "base_bdevs_list": [ 00:08:20.872 { 00:08:20.872 "name": "BaseBdev1", 00:08:20.872 "uuid": "44c8f1e9-a840-539e-bc3b-e4956dd9fe5d", 00:08:20.872 "is_configured": true, 00:08:20.872 "data_offset": 2048, 00:08:20.872 "data_size": 63488 00:08:20.872 }, 00:08:20.872 { 00:08:20.872 "name": "BaseBdev2", 00:08:20.872 "uuid": "9e7ad814-840e-56d2-8d51-42fcbc6f21eb", 00:08:20.872 "is_configured": true, 00:08:20.872 "data_offset": 2048, 00:08:20.872 "data_size": 63488 00:08:20.872 } 00:08:20.872 ] 00:08:20.872 }' 00:08:20.872 20:21:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.872 20:21:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.131 20:21:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:21.132 20:21:14 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:21.132 [2024-11-26 20:21:14.681082] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:22.070 20:21:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:22.070 20:21:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.070 20:21:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.070 20:21:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.070 20:21:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:22.070 20:21:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:22.070 20:21:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:22.070 20:21:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:22.070 20:21:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:22.070 20:21:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:22.070 20:21:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:22.070 20:21:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.070 20:21:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:22.070 20:21:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.070 20:21:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.070 20:21:15 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.070 20:21:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.070 20:21:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:22.070 20:21:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.070 20:21:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.070 20:21:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.329 20:21:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.329 20:21:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.329 "name": "raid_bdev1", 00:08:22.329 "uuid": "2b8eb814-ba92-4b6c-8fe2-9d16e41350c9", 00:08:22.329 "strip_size_kb": 64, 00:08:22.329 "state": "online", 00:08:22.329 "raid_level": "raid0", 00:08:22.329 "superblock": true, 00:08:22.329 "num_base_bdevs": 2, 00:08:22.329 "num_base_bdevs_discovered": 2, 00:08:22.329 "num_base_bdevs_operational": 2, 00:08:22.329 "base_bdevs_list": [ 00:08:22.329 { 00:08:22.329 "name": "BaseBdev1", 00:08:22.329 "uuid": "44c8f1e9-a840-539e-bc3b-e4956dd9fe5d", 00:08:22.329 "is_configured": true, 00:08:22.329 "data_offset": 2048, 00:08:22.329 "data_size": 63488 00:08:22.329 }, 00:08:22.329 { 00:08:22.329 "name": "BaseBdev2", 00:08:22.329 "uuid": "9e7ad814-840e-56d2-8d51-42fcbc6f21eb", 00:08:22.329 "is_configured": true, 00:08:22.329 "data_offset": 2048, 00:08:22.329 "data_size": 63488 00:08:22.329 } 00:08:22.329 ] 00:08:22.329 }' 00:08:22.329 20:21:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.329 20:21:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.602 20:21:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:08:22.602 20:21:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.602 20:21:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.602 [2024-11-26 20:21:16.054594] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:22.602 [2024-11-26 20:21:16.054639] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:22.602 [2024-11-26 20:21:16.057516] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:22.602 [2024-11-26 20:21:16.057635] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:22.603 [2024-11-26 20:21:16.057680] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:22.603 [2024-11-26 20:21:16.057691] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:08:22.603 { 00:08:22.603 "results": [ 00:08:22.603 { 00:08:22.603 "job": "raid_bdev1", 00:08:22.603 "core_mask": "0x1", 00:08:22.603 "workload": "randrw", 00:08:22.603 "percentage": 50, 00:08:22.603 "status": "finished", 00:08:22.603 "queue_depth": 1, 00:08:22.603 "io_size": 131072, 00:08:22.603 "runtime": 1.374278, 00:08:22.603 "iops": 13536.562471348592, 00:08:22.603 "mibps": 1692.070308918574, 00:08:22.603 "io_failed": 1, 00:08:22.603 "io_timeout": 0, 00:08:22.603 "avg_latency_us": 101.87954433427002, 00:08:22.603 "min_latency_us": 27.72401746724891, 00:08:22.603 "max_latency_us": 1695.6366812227075 00:08:22.603 } 00:08:22.603 ], 00:08:22.603 "core_count": 1 00:08:22.603 } 00:08:22.603 20:21:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.603 20:21:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73286 00:08:22.603 20:21:16 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@950 -- # '[' -z 73286 ']' 00:08:22.603 20:21:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 73286 00:08:22.603 20:21:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:08:22.603 20:21:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:22.603 20:21:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73286 00:08:22.603 killing process with pid 73286 00:08:22.603 20:21:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:22.603 20:21:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:22.603 20:21:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73286' 00:08:22.603 20:21:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 73286 00:08:22.603 [2024-11-26 20:21:16.101894] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:22.603 20:21:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 73286 00:08:22.603 [2024-11-26 20:21:16.129636] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:23.172 20:21:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:23.172 20:21:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.3Q6pv0EpP6 00:08:23.172 20:21:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:23.172 20:21:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:08:23.172 20:21:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:23.172 ************************************ 00:08:23.172 END TEST raid_write_error_test 00:08:23.172 ************************************ 00:08:23.172 
20:21:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:23.172 20:21:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:23.172 20:21:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:08:23.172 00:08:23.172 real 0m3.424s 00:08:23.172 user 0m4.260s 00:08:23.172 sys 0m0.592s 00:08:23.172 20:21:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:23.172 20:21:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.172 20:21:16 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:23.172 20:21:16 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:08:23.172 20:21:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:23.172 20:21:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:23.172 20:21:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:23.172 ************************************ 00:08:23.172 START TEST raid_state_function_test 00:08:23.172 ************************************ 00:08:23.172 20:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 false 00:08:23.172 20:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:23.172 20:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:23.172 20:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:23.172 20:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:23.172 20:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:23.172 20:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:08:23.172 20:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:23.172 20:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:23.172 20:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:23.172 20:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:23.172 20:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:23.172 20:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:23.173 20:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:23.173 20:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:23.173 20:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:23.173 20:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:23.173 20:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:23.173 20:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:23.173 20:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:23.173 20:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:23.173 20:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:23.173 20:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:23.173 20:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:23.173 20:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73413 00:08:23.173 20:21:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:23.173 20:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73413' 00:08:23.173 Process raid pid: 73413 00:08:23.173 20:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73413 00:08:23.173 20:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 73413 ']' 00:08:23.173 20:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.173 20:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:23.173 20:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.173 20:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:23.173 20:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.173 [2024-11-26 20:21:16.668352] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:23.173 [2024-11-26 20:21:16.668579] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:23.432 [2024-11-26 20:21:16.829408] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.432 [2024-11-26 20:21:16.908750] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.432 [2024-11-26 20:21:16.981123] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:23.432 [2024-11-26 20:21:16.981158] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:24.002 20:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:24.002 20:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:24.002 20:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:24.002 20:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.002 20:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.002 [2024-11-26 20:21:17.540681] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:24.002 [2024-11-26 20:21:17.540828] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:24.002 [2024-11-26 20:21:17.540847] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:24.002 [2024-11-26 20:21:17.540860] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:24.002 20:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.002 20:21:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:24.002 20:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:24.002 20:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:24.002 20:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:24.002 20:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.002 20:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:24.002 20:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.002 20:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.002 20:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.002 20:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.002 20:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.002 20:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.002 20:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.002 20:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:24.262 20:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.262 20:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.262 "name": "Existed_Raid", 00:08:24.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.262 "strip_size_kb": 64, 00:08:24.262 "state": "configuring", 00:08:24.262 
"raid_level": "concat", 00:08:24.262 "superblock": false, 00:08:24.262 "num_base_bdevs": 2, 00:08:24.262 "num_base_bdevs_discovered": 0, 00:08:24.262 "num_base_bdevs_operational": 2, 00:08:24.262 "base_bdevs_list": [ 00:08:24.262 { 00:08:24.262 "name": "BaseBdev1", 00:08:24.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.262 "is_configured": false, 00:08:24.262 "data_offset": 0, 00:08:24.262 "data_size": 0 00:08:24.262 }, 00:08:24.262 { 00:08:24.262 "name": "BaseBdev2", 00:08:24.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.262 "is_configured": false, 00:08:24.262 "data_offset": 0, 00:08:24.262 "data_size": 0 00:08:24.262 } 00:08:24.262 ] 00:08:24.262 }' 00:08:24.262 20:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.262 20:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.522 20:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:24.522 20:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.522 20:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.522 [2024-11-26 20:21:17.943907] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:24.522 [2024-11-26 20:21:17.943960] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:08:24.522 20:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.522 20:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:24.522 20:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.522 20:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:24.522 [2024-11-26 20:21:17.955920] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:24.522 [2024-11-26 20:21:17.956017] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:24.522 [2024-11-26 20:21:17.956053] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:24.522 [2024-11-26 20:21:17.956081] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:24.522 20:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.522 20:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:24.522 20:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.522 20:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.522 [2024-11-26 20:21:17.978620] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:24.522 BaseBdev1 00:08:24.522 20:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.522 20:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:24.522 20:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:24.522 20:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:24.522 20:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:24.522 20:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:24.522 20:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:24.522 20:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:08:24.522 20:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.522 20:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.522 20:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.522 20:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:24.522 20:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.522 20:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.522 [ 00:08:24.522 { 00:08:24.522 "name": "BaseBdev1", 00:08:24.522 "aliases": [ 00:08:24.522 "95795f33-ed2d-40ab-9bb8-52c19f6057f4" 00:08:24.522 ], 00:08:24.522 "product_name": "Malloc disk", 00:08:24.522 "block_size": 512, 00:08:24.522 "num_blocks": 65536, 00:08:24.522 "uuid": "95795f33-ed2d-40ab-9bb8-52c19f6057f4", 00:08:24.522 "assigned_rate_limits": { 00:08:24.522 "rw_ios_per_sec": 0, 00:08:24.522 "rw_mbytes_per_sec": 0, 00:08:24.522 "r_mbytes_per_sec": 0, 00:08:24.522 "w_mbytes_per_sec": 0 00:08:24.522 }, 00:08:24.522 "claimed": true, 00:08:24.522 "claim_type": "exclusive_write", 00:08:24.522 "zoned": false, 00:08:24.522 "supported_io_types": { 00:08:24.522 "read": true, 00:08:24.522 "write": true, 00:08:24.522 "unmap": true, 00:08:24.522 "flush": true, 00:08:24.522 "reset": true, 00:08:24.522 "nvme_admin": false, 00:08:24.522 "nvme_io": false, 00:08:24.522 "nvme_io_md": false, 00:08:24.522 "write_zeroes": true, 00:08:24.522 "zcopy": true, 00:08:24.522 "get_zone_info": false, 00:08:24.522 "zone_management": false, 00:08:24.522 "zone_append": false, 00:08:24.522 "compare": false, 00:08:24.522 "compare_and_write": false, 00:08:24.522 "abort": true, 00:08:24.522 "seek_hole": false, 00:08:24.522 "seek_data": false, 00:08:24.522 "copy": true, 00:08:24.522 "nvme_iov_md": 
false 00:08:24.522 }, 00:08:24.522 "memory_domains": [ 00:08:24.522 { 00:08:24.522 "dma_device_id": "system", 00:08:24.522 "dma_device_type": 1 00:08:24.522 }, 00:08:24.522 { 00:08:24.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.522 "dma_device_type": 2 00:08:24.522 } 00:08:24.522 ], 00:08:24.522 "driver_specific": {} 00:08:24.522 } 00:08:24.522 ] 00:08:24.523 20:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.523 20:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:24.523 20:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:24.523 20:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:24.523 20:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:24.523 20:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:24.523 20:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.523 20:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:24.523 20:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.523 20:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.523 20:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.523 20:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.523 20:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.523 20:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:24.523 
20:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.523 20:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.523 20:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.782 20:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.782 "name": "Existed_Raid", 00:08:24.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.782 "strip_size_kb": 64, 00:08:24.782 "state": "configuring", 00:08:24.782 "raid_level": "concat", 00:08:24.782 "superblock": false, 00:08:24.782 "num_base_bdevs": 2, 00:08:24.782 "num_base_bdevs_discovered": 1, 00:08:24.782 "num_base_bdevs_operational": 2, 00:08:24.782 "base_bdevs_list": [ 00:08:24.782 { 00:08:24.782 "name": "BaseBdev1", 00:08:24.782 "uuid": "95795f33-ed2d-40ab-9bb8-52c19f6057f4", 00:08:24.782 "is_configured": true, 00:08:24.782 "data_offset": 0, 00:08:24.782 "data_size": 65536 00:08:24.782 }, 00:08:24.782 { 00:08:24.782 "name": "BaseBdev2", 00:08:24.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.782 "is_configured": false, 00:08:24.782 "data_offset": 0, 00:08:24.782 "data_size": 0 00:08:24.782 } 00:08:24.782 ] 00:08:24.782 }' 00:08:24.782 20:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.782 20:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.041 20:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:25.041 20:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.041 20:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.041 [2024-11-26 20:21:18.429909] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:25.041 [2024-11-26 20:21:18.429968] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:08:25.041 20:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.041 20:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:25.042 20:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.042 20:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.042 [2024-11-26 20:21:18.441930] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:25.042 [2024-11-26 20:21:18.444025] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:25.042 [2024-11-26 20:21:18.444112] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:25.042 20:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.042 20:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:25.042 20:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:25.042 20:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:25.042 20:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.042 20:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:25.042 20:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:25.042 20:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.042 20:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:08:25.042 20:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.042 20:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.042 20:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.042 20:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.042 20:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.042 20:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.042 20:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.042 20:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.042 20:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.042 20:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.042 "name": "Existed_Raid", 00:08:25.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.042 "strip_size_kb": 64, 00:08:25.042 "state": "configuring", 00:08:25.042 "raid_level": "concat", 00:08:25.042 "superblock": false, 00:08:25.042 "num_base_bdevs": 2, 00:08:25.042 "num_base_bdevs_discovered": 1, 00:08:25.042 "num_base_bdevs_operational": 2, 00:08:25.042 "base_bdevs_list": [ 00:08:25.042 { 00:08:25.042 "name": "BaseBdev1", 00:08:25.042 "uuid": "95795f33-ed2d-40ab-9bb8-52c19f6057f4", 00:08:25.042 "is_configured": true, 00:08:25.042 "data_offset": 0, 00:08:25.042 "data_size": 65536 00:08:25.042 }, 00:08:25.042 { 00:08:25.042 "name": "BaseBdev2", 00:08:25.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.042 "is_configured": false, 00:08:25.042 "data_offset": 0, 00:08:25.042 "data_size": 0 00:08:25.042 } 
00:08:25.042 ] 00:08:25.042 }' 00:08:25.042 20:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.042 20:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.300 20:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:25.300 20:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.300 20:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.300 [2024-11-26 20:21:18.848971] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:25.300 [2024-11-26 20:21:18.849039] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:25.300 [2024-11-26 20:21:18.849053] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:25.300 [2024-11-26 20:21:18.849415] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:25.300 [2024-11-26 20:21:18.849586] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:25.300 [2024-11-26 20:21:18.849606] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:08:25.300 [2024-11-26 20:21:18.849906] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:25.560 BaseBdev2 00:08:25.560 20:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.560 20:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:25.560 20:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:25.560 20:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:25.560 20:21:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:25.560 20:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:25.560 20:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:25.560 20:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:25.560 20:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.560 20:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.560 20:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.560 20:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:25.560 20:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.560 20:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.560 [ 00:08:25.560 { 00:08:25.560 "name": "BaseBdev2", 00:08:25.560 "aliases": [ 00:08:25.560 "6aa09d69-615f-49fb-a306-621008c16f78" 00:08:25.560 ], 00:08:25.560 "product_name": "Malloc disk", 00:08:25.560 "block_size": 512, 00:08:25.560 "num_blocks": 65536, 00:08:25.560 "uuid": "6aa09d69-615f-49fb-a306-621008c16f78", 00:08:25.560 "assigned_rate_limits": { 00:08:25.560 "rw_ios_per_sec": 0, 00:08:25.560 "rw_mbytes_per_sec": 0, 00:08:25.560 "r_mbytes_per_sec": 0, 00:08:25.560 "w_mbytes_per_sec": 0 00:08:25.560 }, 00:08:25.560 "claimed": true, 00:08:25.560 "claim_type": "exclusive_write", 00:08:25.560 "zoned": false, 00:08:25.560 "supported_io_types": { 00:08:25.560 "read": true, 00:08:25.560 "write": true, 00:08:25.560 "unmap": true, 00:08:25.560 "flush": true, 00:08:25.560 "reset": true, 00:08:25.560 "nvme_admin": false, 00:08:25.560 "nvme_io": false, 00:08:25.560 "nvme_io_md": 
false, 00:08:25.560 "write_zeroes": true, 00:08:25.560 "zcopy": true, 00:08:25.560 "get_zone_info": false, 00:08:25.560 "zone_management": false, 00:08:25.560 "zone_append": false, 00:08:25.560 "compare": false, 00:08:25.560 "compare_and_write": false, 00:08:25.560 "abort": true, 00:08:25.560 "seek_hole": false, 00:08:25.560 "seek_data": false, 00:08:25.560 "copy": true, 00:08:25.560 "nvme_iov_md": false 00:08:25.560 }, 00:08:25.560 "memory_domains": [ 00:08:25.560 { 00:08:25.560 "dma_device_id": "system", 00:08:25.560 "dma_device_type": 1 00:08:25.560 }, 00:08:25.560 { 00:08:25.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.560 "dma_device_type": 2 00:08:25.560 } 00:08:25.560 ], 00:08:25.560 "driver_specific": {} 00:08:25.560 } 00:08:25.560 ] 00:08:25.560 20:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.560 20:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:25.560 20:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:25.560 20:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:25.560 20:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:08:25.560 20:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.560 20:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:25.560 20:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:25.560 20:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.560 20:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:25.560 20:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:25.560 20:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.560 20:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.560 20:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.560 20:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.560 20:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.560 20:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.560 20:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.560 20:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.560 20:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.560 "name": "Existed_Raid", 00:08:25.560 "uuid": "9dc4f259-27bb-4322-9188-e0674076fb7d", 00:08:25.560 "strip_size_kb": 64, 00:08:25.560 "state": "online", 00:08:25.560 "raid_level": "concat", 00:08:25.560 "superblock": false, 00:08:25.560 "num_base_bdevs": 2, 00:08:25.560 "num_base_bdevs_discovered": 2, 00:08:25.560 "num_base_bdevs_operational": 2, 00:08:25.560 "base_bdevs_list": [ 00:08:25.560 { 00:08:25.560 "name": "BaseBdev1", 00:08:25.560 "uuid": "95795f33-ed2d-40ab-9bb8-52c19f6057f4", 00:08:25.560 "is_configured": true, 00:08:25.560 "data_offset": 0, 00:08:25.560 "data_size": 65536 00:08:25.560 }, 00:08:25.560 { 00:08:25.560 "name": "BaseBdev2", 00:08:25.560 "uuid": "6aa09d69-615f-49fb-a306-621008c16f78", 00:08:25.560 "is_configured": true, 00:08:25.560 "data_offset": 0, 00:08:25.560 "data_size": 65536 00:08:25.560 } 00:08:25.560 ] 00:08:25.560 }' 00:08:25.560 20:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:25.560 20:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.821 20:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:25.821 20:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:25.821 20:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:25.821 20:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:25.821 20:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:25.821 20:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:25.821 20:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:25.821 20:21:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.821 20:21:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.821 20:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:25.821 [2024-11-26 20:21:19.288604] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:25.821 20:21:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.821 20:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:25.821 "name": "Existed_Raid", 00:08:25.821 "aliases": [ 00:08:25.821 "9dc4f259-27bb-4322-9188-e0674076fb7d" 00:08:25.821 ], 00:08:25.821 "product_name": "Raid Volume", 00:08:25.821 "block_size": 512, 00:08:25.821 "num_blocks": 131072, 00:08:25.821 "uuid": "9dc4f259-27bb-4322-9188-e0674076fb7d", 00:08:25.821 "assigned_rate_limits": { 00:08:25.821 "rw_ios_per_sec": 0, 00:08:25.821 "rw_mbytes_per_sec": 0, 00:08:25.821 "r_mbytes_per_sec": 
0, 00:08:25.821 "w_mbytes_per_sec": 0 00:08:25.821 }, 00:08:25.821 "claimed": false, 00:08:25.821 "zoned": false, 00:08:25.821 "supported_io_types": { 00:08:25.821 "read": true, 00:08:25.821 "write": true, 00:08:25.821 "unmap": true, 00:08:25.821 "flush": true, 00:08:25.821 "reset": true, 00:08:25.821 "nvme_admin": false, 00:08:25.821 "nvme_io": false, 00:08:25.821 "nvme_io_md": false, 00:08:25.821 "write_zeroes": true, 00:08:25.821 "zcopy": false, 00:08:25.821 "get_zone_info": false, 00:08:25.821 "zone_management": false, 00:08:25.821 "zone_append": false, 00:08:25.821 "compare": false, 00:08:25.821 "compare_and_write": false, 00:08:25.821 "abort": false, 00:08:25.821 "seek_hole": false, 00:08:25.821 "seek_data": false, 00:08:25.821 "copy": false, 00:08:25.821 "nvme_iov_md": false 00:08:25.821 }, 00:08:25.821 "memory_domains": [ 00:08:25.821 { 00:08:25.821 "dma_device_id": "system", 00:08:25.821 "dma_device_type": 1 00:08:25.821 }, 00:08:25.821 { 00:08:25.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.821 "dma_device_type": 2 00:08:25.821 }, 00:08:25.821 { 00:08:25.821 "dma_device_id": "system", 00:08:25.821 "dma_device_type": 1 00:08:25.821 }, 00:08:25.821 { 00:08:25.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.821 "dma_device_type": 2 00:08:25.821 } 00:08:25.821 ], 00:08:25.821 "driver_specific": { 00:08:25.821 "raid": { 00:08:25.821 "uuid": "9dc4f259-27bb-4322-9188-e0674076fb7d", 00:08:25.821 "strip_size_kb": 64, 00:08:25.821 "state": "online", 00:08:25.821 "raid_level": "concat", 00:08:25.821 "superblock": false, 00:08:25.821 "num_base_bdevs": 2, 00:08:25.821 "num_base_bdevs_discovered": 2, 00:08:25.821 "num_base_bdevs_operational": 2, 00:08:25.821 "base_bdevs_list": [ 00:08:25.821 { 00:08:25.821 "name": "BaseBdev1", 00:08:25.821 "uuid": "95795f33-ed2d-40ab-9bb8-52c19f6057f4", 00:08:25.821 "is_configured": true, 00:08:25.821 "data_offset": 0, 00:08:25.822 "data_size": 65536 00:08:25.822 }, 00:08:25.822 { 00:08:25.822 "name": "BaseBdev2", 
00:08:25.822 "uuid": "6aa09d69-615f-49fb-a306-621008c16f78", 00:08:25.822 "is_configured": true, 00:08:25.822 "data_offset": 0, 00:08:25.822 "data_size": 65536 00:08:25.822 } 00:08:25.822 ] 00:08:25.822 } 00:08:25.822 } 00:08:25.822 }' 00:08:25.822 20:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:26.082 20:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:26.082 BaseBdev2' 00:08:26.082 20:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:26.082 20:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:26.082 20:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:26.082 20:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:26.082 20:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:26.082 20:21:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.082 20:21:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.082 20:21:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.082 20:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:26.082 20:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:26.082 20:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:26.082 20:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:08:26.082 20:21:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.082 20:21:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.082 20:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:26.082 20:21:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.082 20:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:26.082 20:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:26.082 20:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:26.082 20:21:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.082 20:21:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.082 [2024-11-26 20:21:19.532069] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:26.082 [2024-11-26 20:21:19.532114] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:26.082 [2024-11-26 20:21:19.532176] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:26.082 20:21:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.082 20:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:26.082 20:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:26.082 20:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:26.082 20:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:26.082 20:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:08:26.082 20:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:08:26.082 20:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.082 20:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:26.082 20:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:26.082 20:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.082 20:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:26.082 20:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.082 20:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.082 20:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.082 20:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.082 20:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.082 20:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.082 20:21:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.083 20:21:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.083 20:21:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.083 20:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.083 "name": "Existed_Raid", 00:08:26.083 "uuid": "9dc4f259-27bb-4322-9188-e0674076fb7d", 00:08:26.083 "strip_size_kb": 64, 00:08:26.083 
"state": "offline", 00:08:26.083 "raid_level": "concat", 00:08:26.083 "superblock": false, 00:08:26.083 "num_base_bdevs": 2, 00:08:26.083 "num_base_bdevs_discovered": 1, 00:08:26.083 "num_base_bdevs_operational": 1, 00:08:26.083 "base_bdevs_list": [ 00:08:26.083 { 00:08:26.083 "name": null, 00:08:26.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.083 "is_configured": false, 00:08:26.083 "data_offset": 0, 00:08:26.083 "data_size": 65536 00:08:26.083 }, 00:08:26.083 { 00:08:26.083 "name": "BaseBdev2", 00:08:26.083 "uuid": "6aa09d69-615f-49fb-a306-621008c16f78", 00:08:26.083 "is_configured": true, 00:08:26.083 "data_offset": 0, 00:08:26.083 "data_size": 65536 00:08:26.083 } 00:08:26.083 ] 00:08:26.083 }' 00:08:26.083 20:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.083 20:21:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.650 20:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:26.650 20:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:26.650 20:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.650 20:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:26.650 20:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.650 20:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.650 20:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.650 20:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:26.650 20:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:26.650 20:21:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:26.650 20:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.650 20:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.650 [2024-11-26 20:21:20.089728] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:26.650 [2024-11-26 20:21:20.089786] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:26.650 20:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.650 20:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:26.650 20:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:26.650 20:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.650 20:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:26.650 20:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.650 20:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.650 20:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.650 20:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:26.650 20:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:26.650 20:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:26.650 20:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73413 00:08:26.650 20:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 73413 ']' 00:08:26.650 20:21:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 73413 00:08:26.650 20:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:26.650 20:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:26.650 20:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73413 00:08:26.909 20:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:26.909 20:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:26.909 20:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73413' 00:08:26.909 killing process with pid 73413 00:08:26.909 20:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 73413 00:08:26.909 [2024-11-26 20:21:20.209499] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:26.909 20:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 73413 00:08:26.909 [2024-11-26 20:21:20.211162] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:27.167 20:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:27.167 00:08:27.167 real 0m4.016s 00:08:27.167 user 0m6.151s 00:08:27.167 sys 0m0.836s 00:08:27.167 20:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:27.167 20:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.167 ************************************ 00:08:27.167 END TEST raid_state_function_test 00:08:27.167 ************************************ 00:08:27.167 20:21:20 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:08:27.167 20:21:20 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 
']' 00:08:27.167 20:21:20 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:27.167 20:21:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:27.167 ************************************ 00:08:27.167 START TEST raid_state_function_test_sb 00:08:27.167 ************************************ 00:08:27.167 20:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 true 00:08:27.167 20:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:27.167 20:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:27.167 20:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:27.167 20:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:27.167 20:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:27.167 20:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:27.167 20:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:27.168 20:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:27.168 20:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:27.168 20:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:27.168 20:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:27.168 20:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:27.168 20:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:27.168 20:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:08:27.168 20:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:27.168 20:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:27.168 20:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:27.168 20:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:27.168 20:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:27.168 20:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:27.168 20:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:27.168 20:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:27.168 20:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:27.168 Process raid pid: 73655 00:08:27.168 20:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73655 00:08:27.168 20:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:27.168 20:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73655' 00:08:27.168 20:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73655 00:08:27.168 20:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 73655 ']' 00:08:27.168 20:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.168 20:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:27.168 20:21:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.168 20:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:27.168 20:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.426 [2024-11-26 20:21:20.758592] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:27.426 [2024-11-26 20:21:20.758820] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:27.426 [2024-11-26 20:21:20.923483] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.684 [2024-11-26 20:21:21.029778] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.684 [2024-11-26 20:21:21.104678] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:27.684 [2024-11-26 20:21:21.104715] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:28.249 20:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:28.249 20:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:08:28.249 20:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:28.249 20:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.249 20:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.249 [2024-11-26 20:21:21.625407] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:08:28.249 [2024-11-26 20:21:21.625523] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:28.249 [2024-11-26 20:21:21.625543] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:28.249 [2024-11-26 20:21:21.625556] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:28.249 20:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.249 20:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:28.249 20:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.249 20:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:28.249 20:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:28.249 20:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.249 20:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:28.249 20:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.249 20:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.249 20:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.249 20:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.249 20:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.249 20:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:08:28.249 20:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.249 20:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.249 20:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.249 20:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.249 "name": "Existed_Raid", 00:08:28.249 "uuid": "b9ff1dba-8b5b-48ad-83f6-5512e6cd4494", 00:08:28.249 "strip_size_kb": 64, 00:08:28.249 "state": "configuring", 00:08:28.250 "raid_level": "concat", 00:08:28.250 "superblock": true, 00:08:28.250 "num_base_bdevs": 2, 00:08:28.250 "num_base_bdevs_discovered": 0, 00:08:28.250 "num_base_bdevs_operational": 2, 00:08:28.250 "base_bdevs_list": [ 00:08:28.250 { 00:08:28.250 "name": "BaseBdev1", 00:08:28.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.250 "is_configured": false, 00:08:28.250 "data_offset": 0, 00:08:28.250 "data_size": 0 00:08:28.250 }, 00:08:28.250 { 00:08:28.250 "name": "BaseBdev2", 00:08:28.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.250 "is_configured": false, 00:08:28.250 "data_offset": 0, 00:08:28.250 "data_size": 0 00:08:28.250 } 00:08:28.250 ] 00:08:28.250 }' 00:08:28.250 20:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.250 20:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.537 20:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:28.537 20:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.537 20:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.537 [2024-11-26 20:21:22.064566] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:08:28.537 [2024-11-26 20:21:22.064698] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:08:28.537 20:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.537 20:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:28.537 20:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.537 20:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.800 [2024-11-26 20:21:22.076604] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:28.800 [2024-11-26 20:21:22.076712] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:28.800 [2024-11-26 20:21:22.076728] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:28.800 [2024-11-26 20:21:22.076739] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:28.800 20:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.800 20:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:28.800 20:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.800 20:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.800 [2024-11-26 20:21:22.103202] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:28.800 BaseBdev1 00:08:28.800 20:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.800 20:21:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:28.800 20:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:28.800 20:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:28.800 20:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:28.800 20:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:28.800 20:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:28.800 20:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:28.800 20:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.800 20:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.800 20:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.800 20:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:28.800 20:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.800 20:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.800 [ 00:08:28.800 { 00:08:28.800 "name": "BaseBdev1", 00:08:28.800 "aliases": [ 00:08:28.800 "da35b758-c3e8-4f13-9144-e23f6a57c722" 00:08:28.800 ], 00:08:28.800 "product_name": "Malloc disk", 00:08:28.800 "block_size": 512, 00:08:28.800 "num_blocks": 65536, 00:08:28.800 "uuid": "da35b758-c3e8-4f13-9144-e23f6a57c722", 00:08:28.800 "assigned_rate_limits": { 00:08:28.800 "rw_ios_per_sec": 0, 00:08:28.800 "rw_mbytes_per_sec": 0, 00:08:28.800 "r_mbytes_per_sec": 0, 00:08:28.800 "w_mbytes_per_sec": 0 00:08:28.800 }, 00:08:28.800 "claimed": true, 
00:08:28.800 "claim_type": "exclusive_write", 00:08:28.800 "zoned": false, 00:08:28.800 "supported_io_types": { 00:08:28.800 "read": true, 00:08:28.800 "write": true, 00:08:28.800 "unmap": true, 00:08:28.800 "flush": true, 00:08:28.800 "reset": true, 00:08:28.800 "nvme_admin": false, 00:08:28.800 "nvme_io": false, 00:08:28.800 "nvme_io_md": false, 00:08:28.800 "write_zeroes": true, 00:08:28.800 "zcopy": true, 00:08:28.800 "get_zone_info": false, 00:08:28.800 "zone_management": false, 00:08:28.800 "zone_append": false, 00:08:28.800 "compare": false, 00:08:28.800 "compare_and_write": false, 00:08:28.800 "abort": true, 00:08:28.800 "seek_hole": false, 00:08:28.800 "seek_data": false, 00:08:28.800 "copy": true, 00:08:28.800 "nvme_iov_md": false 00:08:28.800 }, 00:08:28.800 "memory_domains": [ 00:08:28.800 { 00:08:28.800 "dma_device_id": "system", 00:08:28.800 "dma_device_type": 1 00:08:28.800 }, 00:08:28.800 { 00:08:28.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.800 "dma_device_type": 2 00:08:28.800 } 00:08:28.800 ], 00:08:28.800 "driver_specific": {} 00:08:28.800 } 00:08:28.800 ] 00:08:28.800 20:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.800 20:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:28.800 20:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:28.800 20:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.800 20:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:28.800 20:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:28.800 20:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.800 20:21:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:28.800 20:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.800 20:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.800 20:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.800 20:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.800 20:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.800 20:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.800 20:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.800 20:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.800 20:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.800 20:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.800 "name": "Existed_Raid", 00:08:28.800 "uuid": "46932a5a-d6e1-40f2-bb61-8ced7974b459", 00:08:28.800 "strip_size_kb": 64, 00:08:28.800 "state": "configuring", 00:08:28.800 "raid_level": "concat", 00:08:28.800 "superblock": true, 00:08:28.800 "num_base_bdevs": 2, 00:08:28.800 "num_base_bdevs_discovered": 1, 00:08:28.800 "num_base_bdevs_operational": 2, 00:08:28.800 "base_bdevs_list": [ 00:08:28.800 { 00:08:28.800 "name": "BaseBdev1", 00:08:28.800 "uuid": "da35b758-c3e8-4f13-9144-e23f6a57c722", 00:08:28.800 "is_configured": true, 00:08:28.800 "data_offset": 2048, 00:08:28.800 "data_size": 63488 00:08:28.800 }, 00:08:28.800 { 00:08:28.800 "name": "BaseBdev2", 00:08:28.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.800 
"is_configured": false, 00:08:28.800 "data_offset": 0, 00:08:28.800 "data_size": 0 00:08:28.800 } 00:08:28.800 ] 00:08:28.800 }' 00:08:28.800 20:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.800 20:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.058 20:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:29.058 20:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.058 20:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.058 [2024-11-26 20:21:22.546527] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:29.058 [2024-11-26 20:21:22.546690] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:08:29.058 20:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.058 20:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:29.058 20:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.058 20:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.058 [2024-11-26 20:21:22.558569] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:29.058 [2024-11-26 20:21:22.560530] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:29.058 [2024-11-26 20:21:22.560581] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:29.058 20:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.058 20:21:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:29.058 20:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:29.058 20:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:29.058 20:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.058 20:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:29.058 20:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:29.058 20:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:29.058 20:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:29.058 20:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.058 20:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.058 20:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.058 20:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.058 20:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.058 20:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.058 20:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.058 20:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.058 20:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.316 20:21:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.316 "name": "Existed_Raid", 00:08:29.316 "uuid": "e95c2a04-140f-4b15-a384-916b36948813", 00:08:29.316 "strip_size_kb": 64, 00:08:29.316 "state": "configuring", 00:08:29.316 "raid_level": "concat", 00:08:29.316 "superblock": true, 00:08:29.316 "num_base_bdevs": 2, 00:08:29.316 "num_base_bdevs_discovered": 1, 00:08:29.316 "num_base_bdevs_operational": 2, 00:08:29.316 "base_bdevs_list": [ 00:08:29.316 { 00:08:29.316 "name": "BaseBdev1", 00:08:29.316 "uuid": "da35b758-c3e8-4f13-9144-e23f6a57c722", 00:08:29.316 "is_configured": true, 00:08:29.316 "data_offset": 2048, 00:08:29.316 "data_size": 63488 00:08:29.316 }, 00:08:29.316 { 00:08:29.316 "name": "BaseBdev2", 00:08:29.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.316 "is_configured": false, 00:08:29.316 "data_offset": 0, 00:08:29.316 "data_size": 0 00:08:29.316 } 00:08:29.316 ] 00:08:29.316 }' 00:08:29.316 20:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.316 20:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.574 20:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:29.574 20:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.574 20:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.574 [2024-11-26 20:21:22.983819] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:29.574 [2024-11-26 20:21:22.984203] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:29.574 [2024-11-26 20:21:22.984272] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:29.574 [2024-11-26 20:21:22.984716] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005ba0 00:08:29.574 BaseBdev2 00:08:29.574 [2024-11-26 20:21:22.984949] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:29.574 [2024-11-26 20:21:22.984973] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:08:29.574 [2024-11-26 20:21:22.985134] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:29.574 20:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.574 20:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:29.574 20:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:29.574 20:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:29.574 20:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:29.574 20:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:29.574 20:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:29.574 20:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:29.574 20:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.574 20:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.574 20:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.574 20:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:29.574 20:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.574 20:21:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.574 [ 00:08:29.574 { 00:08:29.574 "name": "BaseBdev2", 00:08:29.574 "aliases": [ 00:08:29.575 "3879c567-a681-4da2-96bc-c8fa20b271b4" 00:08:29.575 ], 00:08:29.575 "product_name": "Malloc disk", 00:08:29.575 "block_size": 512, 00:08:29.575 "num_blocks": 65536, 00:08:29.575 "uuid": "3879c567-a681-4da2-96bc-c8fa20b271b4", 00:08:29.575 "assigned_rate_limits": { 00:08:29.575 "rw_ios_per_sec": 0, 00:08:29.575 "rw_mbytes_per_sec": 0, 00:08:29.575 "r_mbytes_per_sec": 0, 00:08:29.575 "w_mbytes_per_sec": 0 00:08:29.575 }, 00:08:29.575 "claimed": true, 00:08:29.575 "claim_type": "exclusive_write", 00:08:29.575 "zoned": false, 00:08:29.575 "supported_io_types": { 00:08:29.575 "read": true, 00:08:29.575 "write": true, 00:08:29.575 "unmap": true, 00:08:29.575 "flush": true, 00:08:29.575 "reset": true, 00:08:29.575 "nvme_admin": false, 00:08:29.575 "nvme_io": false, 00:08:29.575 "nvme_io_md": false, 00:08:29.575 "write_zeroes": true, 00:08:29.575 "zcopy": true, 00:08:29.575 "get_zone_info": false, 00:08:29.575 "zone_management": false, 00:08:29.575 "zone_append": false, 00:08:29.575 "compare": false, 00:08:29.575 "compare_and_write": false, 00:08:29.575 "abort": true, 00:08:29.575 "seek_hole": false, 00:08:29.575 "seek_data": false, 00:08:29.575 "copy": true, 00:08:29.575 "nvme_iov_md": false 00:08:29.575 }, 00:08:29.575 "memory_domains": [ 00:08:29.575 { 00:08:29.575 "dma_device_id": "system", 00:08:29.575 "dma_device_type": 1 00:08:29.575 }, 00:08:29.575 { 00:08:29.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.575 "dma_device_type": 2 00:08:29.575 } 00:08:29.575 ], 00:08:29.575 "driver_specific": {} 00:08:29.575 } 00:08:29.575 ] 00:08:29.575 20:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.575 20:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:29.575 20:21:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:29.575 20:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:29.575 20:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:08:29.575 20:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.575 20:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:29.575 20:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:29.575 20:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:29.575 20:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:29.575 20:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.575 20:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.575 20:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.575 20:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.575 20:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.575 20:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.575 20:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.575 20:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.575 20:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.575 20:21:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.575 "name": "Existed_Raid", 00:08:29.575 "uuid": "e95c2a04-140f-4b15-a384-916b36948813", 00:08:29.575 "strip_size_kb": 64, 00:08:29.575 "state": "online", 00:08:29.575 "raid_level": "concat", 00:08:29.575 "superblock": true, 00:08:29.575 "num_base_bdevs": 2, 00:08:29.575 "num_base_bdevs_discovered": 2, 00:08:29.575 "num_base_bdevs_operational": 2, 00:08:29.575 "base_bdevs_list": [ 00:08:29.575 { 00:08:29.575 "name": "BaseBdev1", 00:08:29.575 "uuid": "da35b758-c3e8-4f13-9144-e23f6a57c722", 00:08:29.575 "is_configured": true, 00:08:29.575 "data_offset": 2048, 00:08:29.575 "data_size": 63488 00:08:29.575 }, 00:08:29.575 { 00:08:29.575 "name": "BaseBdev2", 00:08:29.575 "uuid": "3879c567-a681-4da2-96bc-c8fa20b271b4", 00:08:29.575 "is_configured": true, 00:08:29.575 "data_offset": 2048, 00:08:29.575 "data_size": 63488 00:08:29.575 } 00:08:29.575 ] 00:08:29.575 }' 00:08:29.575 20:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.575 20:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.140 20:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:30.140 20:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:30.140 20:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:30.140 20:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:30.140 20:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:30.140 20:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:30.140 20:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:08:30.140 20:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:30.140 20:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.140 20:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.140 [2024-11-26 20:21:23.427451] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:30.140 20:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.140 20:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:30.140 "name": "Existed_Raid", 00:08:30.140 "aliases": [ 00:08:30.140 "e95c2a04-140f-4b15-a384-916b36948813" 00:08:30.140 ], 00:08:30.140 "product_name": "Raid Volume", 00:08:30.140 "block_size": 512, 00:08:30.140 "num_blocks": 126976, 00:08:30.140 "uuid": "e95c2a04-140f-4b15-a384-916b36948813", 00:08:30.140 "assigned_rate_limits": { 00:08:30.140 "rw_ios_per_sec": 0, 00:08:30.140 "rw_mbytes_per_sec": 0, 00:08:30.140 "r_mbytes_per_sec": 0, 00:08:30.140 "w_mbytes_per_sec": 0 00:08:30.140 }, 00:08:30.140 "claimed": false, 00:08:30.140 "zoned": false, 00:08:30.140 "supported_io_types": { 00:08:30.140 "read": true, 00:08:30.140 "write": true, 00:08:30.140 "unmap": true, 00:08:30.140 "flush": true, 00:08:30.140 "reset": true, 00:08:30.140 "nvme_admin": false, 00:08:30.140 "nvme_io": false, 00:08:30.140 "nvme_io_md": false, 00:08:30.140 "write_zeroes": true, 00:08:30.140 "zcopy": false, 00:08:30.140 "get_zone_info": false, 00:08:30.140 "zone_management": false, 00:08:30.140 "zone_append": false, 00:08:30.140 "compare": false, 00:08:30.140 "compare_and_write": false, 00:08:30.140 "abort": false, 00:08:30.140 "seek_hole": false, 00:08:30.140 "seek_data": false, 00:08:30.140 "copy": false, 00:08:30.140 "nvme_iov_md": false 00:08:30.140 }, 00:08:30.140 "memory_domains": [ 00:08:30.140 { 00:08:30.140 
"dma_device_id": "system", 00:08:30.140 "dma_device_type": 1 00:08:30.140 }, 00:08:30.140 { 00:08:30.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.140 "dma_device_type": 2 00:08:30.140 }, 00:08:30.140 { 00:08:30.140 "dma_device_id": "system", 00:08:30.140 "dma_device_type": 1 00:08:30.140 }, 00:08:30.140 { 00:08:30.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.140 "dma_device_type": 2 00:08:30.140 } 00:08:30.140 ], 00:08:30.140 "driver_specific": { 00:08:30.140 "raid": { 00:08:30.140 "uuid": "e95c2a04-140f-4b15-a384-916b36948813", 00:08:30.140 "strip_size_kb": 64, 00:08:30.140 "state": "online", 00:08:30.140 "raid_level": "concat", 00:08:30.140 "superblock": true, 00:08:30.140 "num_base_bdevs": 2, 00:08:30.140 "num_base_bdevs_discovered": 2, 00:08:30.140 "num_base_bdevs_operational": 2, 00:08:30.140 "base_bdevs_list": [ 00:08:30.140 { 00:08:30.140 "name": "BaseBdev1", 00:08:30.140 "uuid": "da35b758-c3e8-4f13-9144-e23f6a57c722", 00:08:30.140 "is_configured": true, 00:08:30.140 "data_offset": 2048, 00:08:30.140 "data_size": 63488 00:08:30.140 }, 00:08:30.140 { 00:08:30.140 "name": "BaseBdev2", 00:08:30.140 "uuid": "3879c567-a681-4da2-96bc-c8fa20b271b4", 00:08:30.140 "is_configured": true, 00:08:30.140 "data_offset": 2048, 00:08:30.140 "data_size": 63488 00:08:30.140 } 00:08:30.140 ] 00:08:30.140 } 00:08:30.140 } 00:08:30.140 }' 00:08:30.140 20:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:30.140 20:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:30.140 BaseBdev2' 00:08:30.140 20:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:30.140 20:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:30.140 20:21:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:30.140 20:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:30.140 20:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:30.140 20:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.140 20:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.141 20:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.141 20:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:30.141 20:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:30.141 20:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:30.141 20:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:30.141 20:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.141 20:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:30.141 20:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.141 20:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.141 20:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:30.141 20:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:30.141 20:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:08:30.141 20:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.141 20:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.141 [2024-11-26 20:21:23.634873] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:30.141 [2024-11-26 20:21:23.634910] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:30.141 [2024-11-26 20:21:23.634965] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:30.141 20:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.141 20:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:30.141 20:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:30.141 20:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:30.141 20:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:30.141 20:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:30.141 20:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:08:30.141 20:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:30.141 20:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:30.141 20:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:30.141 20:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.141 20:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:08:30.141 20:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.141 20:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.141 20:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.141 20:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.141 20:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.141 20:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.141 20:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.141 20:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.141 20:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.397 20:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.397 "name": "Existed_Raid", 00:08:30.397 "uuid": "e95c2a04-140f-4b15-a384-916b36948813", 00:08:30.397 "strip_size_kb": 64, 00:08:30.397 "state": "offline", 00:08:30.397 "raid_level": "concat", 00:08:30.397 "superblock": true, 00:08:30.397 "num_base_bdevs": 2, 00:08:30.397 "num_base_bdevs_discovered": 1, 00:08:30.397 "num_base_bdevs_operational": 1, 00:08:30.397 "base_bdevs_list": [ 00:08:30.397 { 00:08:30.397 "name": null, 00:08:30.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.397 "is_configured": false, 00:08:30.398 "data_offset": 0, 00:08:30.398 "data_size": 63488 00:08:30.398 }, 00:08:30.398 { 00:08:30.398 "name": "BaseBdev2", 00:08:30.398 "uuid": "3879c567-a681-4da2-96bc-c8fa20b271b4", 00:08:30.398 "is_configured": true, 00:08:30.398 "data_offset": 2048, 00:08:30.398 "data_size": 63488 00:08:30.398 } 00:08:30.398 ] 
00:08:30.398 }' 00:08:30.398 20:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.398 20:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.656 20:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:30.656 20:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:30.656 20:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.656 20:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:30.656 20:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.656 20:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.656 20:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.656 20:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:30.656 20:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:30.656 20:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:30.656 20:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.656 20:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.656 [2024-11-26 20:21:24.179793] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:30.656 [2024-11-26 20:21:24.179899] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:30.656 20:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.656 20:21:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:30.656 20:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:30.656 20:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.656 20:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.656 20:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:30.656 20:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.938 20:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.938 20:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:30.938 20:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:30.938 20:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:30.938 20:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73655 00:08:30.938 20:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 73655 ']' 00:08:30.938 20:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 73655 00:08:30.938 20:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:08:30.938 20:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:30.938 20:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73655 00:08:30.938 killing process with pid 73655 00:08:30.938 20:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:30.938 20:21:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:30.938 20:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73655' 00:08:30.938 20:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 73655 00:08:30.938 [2024-11-26 20:21:24.276449] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:30.938 20:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 73655 00:08:30.938 [2024-11-26 20:21:24.278036] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:31.197 20:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:31.197 ************************************ 00:08:31.197 END TEST raid_state_function_test_sb 00:08:31.197 ************************************ 00:08:31.197 00:08:31.197 real 0m3.987s 00:08:31.197 user 0m6.066s 00:08:31.197 sys 0m0.862s 00:08:31.197 20:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:31.197 20:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.197 20:21:24 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:08:31.197 20:21:24 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:31.197 20:21:24 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:31.197 20:21:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:31.197 ************************************ 00:08:31.197 START TEST raid_superblock_test 00:08:31.197 ************************************ 00:08:31.197 20:21:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 2 00:08:31.197 20:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:08:31.197 20:21:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:31.197 20:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:31.197 20:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:31.197 20:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:31.197 20:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:31.197 20:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:31.197 20:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:31.197 20:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:31.197 20:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:31.197 20:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:31.197 20:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:31.197 20:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:31.197 20:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:08:31.197 20:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:31.197 20:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:31.197 20:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=73896 00:08:31.197 20:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:31.197 20:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 73896 00:08:31.197 20:21:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 73896 ']' 00:08:31.197 
20:21:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.197 20:21:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:31.197 20:21:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.197 20:21:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:31.197 20:21:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.455 [2024-11-26 20:21:24.800417] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:31.455 [2024-11-26 20:21:24.800638] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73896 ] 00:08:31.455 [2024-11-26 20:21:24.961576] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.712 [2024-11-26 20:21:25.045693] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.712 [2024-11-26 20:21:25.119275] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:31.712 [2024-11-26 20:21:25.119411] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:32.277 20:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:32.277 20:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:32.277 20:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:32.277 20:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:08:32.277 20:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:32.277 20:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:32.277 20:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:32.277 20:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:32.277 20:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:32.277 20:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:32.277 20:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:32.277 20:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.277 20:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.277 malloc1 00:08:32.277 20:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.277 20:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:32.277 20:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.277 20:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.277 [2024-11-26 20:21:25.723439] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:32.277 [2024-11-26 20:21:25.723519] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.277 [2024-11-26 20:21:25.723541] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:32.277 [2024-11-26 20:21:25.723557] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:08:32.277 [2024-11-26 20:21:25.725951] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:32.277 [2024-11-26 20:21:25.726001] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:32.277 pt1 00:08:32.277 20:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.277 20:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:32.277 20:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:32.277 20:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:32.277 20:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:32.277 20:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:32.278 20:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:32.278 20:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:32.278 20:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:32.278 20:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:32.278 20:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.278 20:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.278 malloc2 00:08:32.278 20:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.278 20:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:32.278 20:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:32.278 20:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.278 [2024-11-26 20:21:25.768774] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:32.278 [2024-11-26 20:21:25.768893] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.278 [2024-11-26 20:21:25.768930] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:32.278 [2024-11-26 20:21:25.768985] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:32.278 [2024-11-26 20:21:25.771444] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:32.278 [2024-11-26 20:21:25.771535] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:32.278 pt2 00:08:32.278 20:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.278 20:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:32.278 20:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:32.278 20:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:32.278 20:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.278 20:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.278 [2024-11-26 20:21:25.780835] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:32.278 [2024-11-26 20:21:25.783042] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:32.278 [2024-11-26 20:21:25.783262] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:08:32.278 [2024-11-26 20:21:25.783327] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:08:32.278 [2024-11-26 20:21:25.783679] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:32.278 [2024-11-26 20:21:25.783892] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:08:32.278 [2024-11-26 20:21:25.783939] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:08:32.278 [2024-11-26 20:21:25.784139] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:32.278 20:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.278 20:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:32.278 20:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:32.278 20:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:32.278 20:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:32.278 20:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.278 20:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:32.278 20:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.278 20:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.278 20:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.278 20:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.278 20:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.278 20:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.278 20:21:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:32.278 20:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.278 20:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.556 20:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.556 "name": "raid_bdev1", 00:08:32.556 "uuid": "0d1203bf-29a8-44d4-bbb5-d254f5b48ef7", 00:08:32.556 "strip_size_kb": 64, 00:08:32.556 "state": "online", 00:08:32.556 "raid_level": "concat", 00:08:32.556 "superblock": true, 00:08:32.556 "num_base_bdevs": 2, 00:08:32.556 "num_base_bdevs_discovered": 2, 00:08:32.556 "num_base_bdevs_operational": 2, 00:08:32.556 "base_bdevs_list": [ 00:08:32.556 { 00:08:32.556 "name": "pt1", 00:08:32.556 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:32.556 "is_configured": true, 00:08:32.556 "data_offset": 2048, 00:08:32.556 "data_size": 63488 00:08:32.556 }, 00:08:32.556 { 00:08:32.556 "name": "pt2", 00:08:32.556 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:32.556 "is_configured": true, 00:08:32.556 "data_offset": 2048, 00:08:32.556 "data_size": 63488 00:08:32.556 } 00:08:32.556 ] 00:08:32.556 }' 00:08:32.556 20:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.556 20:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.815 20:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:32.815 20:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:32.815 20:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:32.815 20:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:32.815 20:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # 
local name 00:08:32.815 20:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:32.815 20:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:32.815 20:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:32.815 20:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.815 20:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.815 [2024-11-26 20:21:26.252405] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:32.815 20:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.815 20:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:32.815 "name": "raid_bdev1", 00:08:32.815 "aliases": [ 00:08:32.815 "0d1203bf-29a8-44d4-bbb5-d254f5b48ef7" 00:08:32.815 ], 00:08:32.815 "product_name": "Raid Volume", 00:08:32.815 "block_size": 512, 00:08:32.815 "num_blocks": 126976, 00:08:32.815 "uuid": "0d1203bf-29a8-44d4-bbb5-d254f5b48ef7", 00:08:32.815 "assigned_rate_limits": { 00:08:32.815 "rw_ios_per_sec": 0, 00:08:32.815 "rw_mbytes_per_sec": 0, 00:08:32.815 "r_mbytes_per_sec": 0, 00:08:32.815 "w_mbytes_per_sec": 0 00:08:32.815 }, 00:08:32.815 "claimed": false, 00:08:32.815 "zoned": false, 00:08:32.815 "supported_io_types": { 00:08:32.815 "read": true, 00:08:32.815 "write": true, 00:08:32.815 "unmap": true, 00:08:32.815 "flush": true, 00:08:32.815 "reset": true, 00:08:32.815 "nvme_admin": false, 00:08:32.815 "nvme_io": false, 00:08:32.815 "nvme_io_md": false, 00:08:32.815 "write_zeroes": true, 00:08:32.815 "zcopy": false, 00:08:32.815 "get_zone_info": false, 00:08:32.815 "zone_management": false, 00:08:32.815 "zone_append": false, 00:08:32.815 "compare": false, 00:08:32.815 "compare_and_write": false, 00:08:32.815 "abort": false, 00:08:32.815 
"seek_hole": false, 00:08:32.815 "seek_data": false, 00:08:32.815 "copy": false, 00:08:32.815 "nvme_iov_md": false 00:08:32.815 }, 00:08:32.815 "memory_domains": [ 00:08:32.815 { 00:08:32.815 "dma_device_id": "system", 00:08:32.815 "dma_device_type": 1 00:08:32.815 }, 00:08:32.815 { 00:08:32.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.815 "dma_device_type": 2 00:08:32.815 }, 00:08:32.815 { 00:08:32.815 "dma_device_id": "system", 00:08:32.815 "dma_device_type": 1 00:08:32.815 }, 00:08:32.815 { 00:08:32.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.815 "dma_device_type": 2 00:08:32.815 } 00:08:32.815 ], 00:08:32.815 "driver_specific": { 00:08:32.815 "raid": { 00:08:32.815 "uuid": "0d1203bf-29a8-44d4-bbb5-d254f5b48ef7", 00:08:32.815 "strip_size_kb": 64, 00:08:32.815 "state": "online", 00:08:32.815 "raid_level": "concat", 00:08:32.815 "superblock": true, 00:08:32.815 "num_base_bdevs": 2, 00:08:32.815 "num_base_bdevs_discovered": 2, 00:08:32.815 "num_base_bdevs_operational": 2, 00:08:32.815 "base_bdevs_list": [ 00:08:32.815 { 00:08:32.815 "name": "pt1", 00:08:32.815 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:32.815 "is_configured": true, 00:08:32.815 "data_offset": 2048, 00:08:32.815 "data_size": 63488 00:08:32.815 }, 00:08:32.815 { 00:08:32.815 "name": "pt2", 00:08:32.815 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:32.815 "is_configured": true, 00:08:32.815 "data_offset": 2048, 00:08:32.815 "data_size": 63488 00:08:32.815 } 00:08:32.815 ] 00:08:32.815 } 00:08:32.815 } 00:08:32.815 }' 00:08:32.815 20:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:32.815 20:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:32.815 pt2' 00:08:32.815 20:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:08:33.084 20:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:33.084 20:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:33.084 20:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:33.084 20:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:33.084 20:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.084 20:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.084 20:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.084 20:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:33.084 20:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:33.084 20:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:33.084 20:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:33.084 20:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:33.084 20:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.084 20:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.084 20:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.084 20:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:33.084 20:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:33.084 20:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq 
-r '.[] | .uuid' 00:08:33.084 20:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:33.084 20:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.084 20:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.084 [2024-11-26 20:21:26.488320] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:33.084 20:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.084 20:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0d1203bf-29a8-44d4-bbb5-d254f5b48ef7 00:08:33.084 20:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 0d1203bf-29a8-44d4-bbb5-d254f5b48ef7 ']' 00:08:33.084 20:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:33.084 20:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.084 20:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.084 [2024-11-26 20:21:26.535992] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:33.084 [2024-11-26 20:21:26.536033] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:33.084 [2024-11-26 20:21:26.536136] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:33.084 [2024-11-26 20:21:26.536195] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:33.085 [2024-11-26 20:21:26.536217] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:08:33.085 20:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.085 20:21:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.085 20:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.085 20:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.085 20:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:33.085 20:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.085 20:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:33.085 20:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:33.085 20:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:33.085 20:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:33.085 20:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.085 20:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.085 20:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.085 20:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:33.085 20:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:33.085 20:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.085 20:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.085 20:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.085 20:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:33.085 20:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.085 20:21:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:33.085 20:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.395 20:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.395 20:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:33.395 20:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:33.395 20:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:33.395 20:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:33.395 20:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:33.395 20:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:33.395 20:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:33.395 20:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:33.395 20:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:33.395 20:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.395 20:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.395 [2024-11-26 20:21:26.655946] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:33.395 [2024-11-26 20:21:26.658199] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:33.395 [2024-11-26 20:21:26.658381] 
bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:33.395 [2024-11-26 20:21:26.658497] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:33.395 [2024-11-26 20:21:26.658563] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:33.395 [2024-11-26 20:21:26.658625] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:08:33.395 request: 00:08:33.395 { 00:08:33.395 "name": "raid_bdev1", 00:08:33.395 "raid_level": "concat", 00:08:33.395 "base_bdevs": [ 00:08:33.395 "malloc1", 00:08:33.395 "malloc2" 00:08:33.395 ], 00:08:33.395 "strip_size_kb": 64, 00:08:33.395 "superblock": false, 00:08:33.395 "method": "bdev_raid_create", 00:08:33.395 "req_id": 1 00:08:33.395 } 00:08:33.395 Got JSON-RPC error response 00:08:33.395 response: 00:08:33.395 { 00:08:33.395 "code": -17, 00:08:33.395 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:33.395 } 00:08:33.395 20:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:33.395 20:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:33.395 20:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:33.395 20:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:33.395 20:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:33.395 20:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:33.395 20:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.395 20:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.395 20:21:26 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:08:33.395 20:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.395 20:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:33.395 20:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:33.395 20:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:33.395 20:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.395 20:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.395 [2024-11-26 20:21:26.719814] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:33.395 [2024-11-26 20:21:26.719908] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:33.395 [2024-11-26 20:21:26.719931] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:33.395 [2024-11-26 20:21:26.719941] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:33.395 [2024-11-26 20:21:26.722553] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:33.395 [2024-11-26 20:21:26.722685] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:33.395 [2024-11-26 20:21:26.722800] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:33.395 [2024-11-26 20:21:26.722854] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:33.395 pt1 00:08:33.395 20:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.395 20:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:08:33.395 20:21:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:33.395 20:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.395 20:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:33.395 20:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.395 20:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:33.395 20:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.395 20:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.395 20:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.396 20:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.396 20:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:33.396 20:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.396 20:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.396 20:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.396 20:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.396 20:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.396 "name": "raid_bdev1", 00:08:33.396 "uuid": "0d1203bf-29a8-44d4-bbb5-d254f5b48ef7", 00:08:33.396 "strip_size_kb": 64, 00:08:33.396 "state": "configuring", 00:08:33.396 "raid_level": "concat", 00:08:33.396 "superblock": true, 00:08:33.396 "num_base_bdevs": 2, 00:08:33.396 "num_base_bdevs_discovered": 1, 00:08:33.396 "num_base_bdevs_operational": 2, 00:08:33.396 "base_bdevs_list": [ 00:08:33.396 { 00:08:33.396 
"name": "pt1", 00:08:33.396 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:33.396 "is_configured": true, 00:08:33.396 "data_offset": 2048, 00:08:33.396 "data_size": 63488 00:08:33.396 }, 00:08:33.396 { 00:08:33.396 "name": null, 00:08:33.396 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:33.396 "is_configured": false, 00:08:33.396 "data_offset": 2048, 00:08:33.396 "data_size": 63488 00:08:33.396 } 00:08:33.396 ] 00:08:33.396 }' 00:08:33.396 20:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.396 20:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.654 20:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:33.654 20:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:33.654 20:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:33.654 20:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:33.654 20:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.654 20:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.654 [2024-11-26 20:21:27.167043] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:33.654 [2024-11-26 20:21:27.167209] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:33.654 [2024-11-26 20:21:27.167268] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:33.654 [2024-11-26 20:21:27.167303] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:33.654 [2024-11-26 20:21:27.167820] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:33.654 [2024-11-26 20:21:27.167898] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:33.654 [2024-11-26 20:21:27.168024] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:33.654 [2024-11-26 20:21:27.168082] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:33.654 [2024-11-26 20:21:27.168213] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:33.654 [2024-11-26 20:21:27.168253] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:33.654 [2024-11-26 20:21:27.168549] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:33.654 [2024-11-26 20:21:27.168745] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:33.654 [2024-11-26 20:21:27.168799] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:08:33.654 [2024-11-26 20:21:27.168969] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:33.654 pt2 00:08:33.654 20:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.654 20:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:33.654 20:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:33.654 20:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:33.654 20:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:33.654 20:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:33.654 20:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:33.654 20:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.654 
20:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:33.654 20:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.654 20:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.654 20:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.654 20:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.654 20:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:33.654 20:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.654 20:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.654 20:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.654 20:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.912 20:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.912 "name": "raid_bdev1", 00:08:33.912 "uuid": "0d1203bf-29a8-44d4-bbb5-d254f5b48ef7", 00:08:33.912 "strip_size_kb": 64, 00:08:33.912 "state": "online", 00:08:33.912 "raid_level": "concat", 00:08:33.912 "superblock": true, 00:08:33.912 "num_base_bdevs": 2, 00:08:33.912 "num_base_bdevs_discovered": 2, 00:08:33.912 "num_base_bdevs_operational": 2, 00:08:33.912 "base_bdevs_list": [ 00:08:33.912 { 00:08:33.912 "name": "pt1", 00:08:33.912 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:33.912 "is_configured": true, 00:08:33.912 "data_offset": 2048, 00:08:33.912 "data_size": 63488 00:08:33.912 }, 00:08:33.912 { 00:08:33.912 "name": "pt2", 00:08:33.912 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:33.912 "is_configured": true, 00:08:33.912 "data_offset": 2048, 00:08:33.912 "data_size": 63488 
00:08:33.912 } 00:08:33.912 ] 00:08:33.912 }' 00:08:33.912 20:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.912 20:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.171 20:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:34.171 20:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:34.171 20:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:34.171 20:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:34.171 20:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:34.171 20:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:34.171 20:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:34.171 20:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:34.171 20:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.171 20:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.171 [2024-11-26 20:21:27.626720] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:34.171 20:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.171 20:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:34.171 "name": "raid_bdev1", 00:08:34.171 "aliases": [ 00:08:34.171 "0d1203bf-29a8-44d4-bbb5-d254f5b48ef7" 00:08:34.171 ], 00:08:34.171 "product_name": "Raid Volume", 00:08:34.171 "block_size": 512, 00:08:34.171 "num_blocks": 126976, 00:08:34.171 "uuid": "0d1203bf-29a8-44d4-bbb5-d254f5b48ef7", 00:08:34.171 "assigned_rate_limits": { 00:08:34.171 
"rw_ios_per_sec": 0, 00:08:34.171 "rw_mbytes_per_sec": 0, 00:08:34.171 "r_mbytes_per_sec": 0, 00:08:34.171 "w_mbytes_per_sec": 0 00:08:34.171 }, 00:08:34.171 "claimed": false, 00:08:34.171 "zoned": false, 00:08:34.171 "supported_io_types": { 00:08:34.171 "read": true, 00:08:34.171 "write": true, 00:08:34.171 "unmap": true, 00:08:34.171 "flush": true, 00:08:34.171 "reset": true, 00:08:34.171 "nvme_admin": false, 00:08:34.171 "nvme_io": false, 00:08:34.171 "nvme_io_md": false, 00:08:34.171 "write_zeroes": true, 00:08:34.171 "zcopy": false, 00:08:34.171 "get_zone_info": false, 00:08:34.171 "zone_management": false, 00:08:34.171 "zone_append": false, 00:08:34.171 "compare": false, 00:08:34.171 "compare_and_write": false, 00:08:34.171 "abort": false, 00:08:34.171 "seek_hole": false, 00:08:34.171 "seek_data": false, 00:08:34.171 "copy": false, 00:08:34.171 "nvme_iov_md": false 00:08:34.171 }, 00:08:34.171 "memory_domains": [ 00:08:34.171 { 00:08:34.171 "dma_device_id": "system", 00:08:34.171 "dma_device_type": 1 00:08:34.171 }, 00:08:34.171 { 00:08:34.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.171 "dma_device_type": 2 00:08:34.171 }, 00:08:34.171 { 00:08:34.171 "dma_device_id": "system", 00:08:34.171 "dma_device_type": 1 00:08:34.171 }, 00:08:34.171 { 00:08:34.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.171 "dma_device_type": 2 00:08:34.171 } 00:08:34.171 ], 00:08:34.171 "driver_specific": { 00:08:34.171 "raid": { 00:08:34.171 "uuid": "0d1203bf-29a8-44d4-bbb5-d254f5b48ef7", 00:08:34.171 "strip_size_kb": 64, 00:08:34.171 "state": "online", 00:08:34.171 "raid_level": "concat", 00:08:34.171 "superblock": true, 00:08:34.171 "num_base_bdevs": 2, 00:08:34.171 "num_base_bdevs_discovered": 2, 00:08:34.171 "num_base_bdevs_operational": 2, 00:08:34.171 "base_bdevs_list": [ 00:08:34.171 { 00:08:34.171 "name": "pt1", 00:08:34.171 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:34.171 "is_configured": true, 00:08:34.171 "data_offset": 2048, 00:08:34.171 
"data_size": 63488 00:08:34.171 }, 00:08:34.171 { 00:08:34.171 "name": "pt2", 00:08:34.171 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:34.171 "is_configured": true, 00:08:34.171 "data_offset": 2048, 00:08:34.171 "data_size": 63488 00:08:34.171 } 00:08:34.171 ] 00:08:34.171 } 00:08:34.171 } 00:08:34.171 }' 00:08:34.171 20:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:34.171 20:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:34.171 pt2' 00:08:34.171 20:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:34.429 20:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:34.429 20:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:34.429 20:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:34.429 20:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.429 20:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:34.429 20:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.429 20:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.429 20:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:34.429 20:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:34.429 20:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:34.429 20:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:08:34.429 20:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:34.430 20:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.430 20:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.430 20:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.430 20:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:34.430 20:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:34.430 20:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:34.430 20:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.430 20:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.430 20:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:34.430 [2024-11-26 20:21:27.842217] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:34.430 20:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.430 20:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 0d1203bf-29a8-44d4-bbb5-d254f5b48ef7 '!=' 0d1203bf-29a8-44d4-bbb5-d254f5b48ef7 ']' 00:08:34.430 20:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:08:34.430 20:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:34.430 20:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:34.430 20:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 73896 00:08:34.430 20:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 73896 
']' 00:08:34.430 20:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 73896 00:08:34.430 20:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:34.430 20:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:34.430 20:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73896 00:08:34.430 killing process with pid 73896 00:08:34.430 20:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:34.430 20:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:34.430 20:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73896' 00:08:34.430 20:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 73896 00:08:34.430 [2024-11-26 20:21:27.932625] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:34.430 [2024-11-26 20:21:27.932733] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:34.430 [2024-11-26 20:21:27.932795] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:34.430 [2024-11-26 20:21:27.932806] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:08:34.430 20:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 73896 00:08:34.430 [2024-11-26 20:21:27.971015] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:34.996 20:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:34.996 00:08:34.996 real 0m3.643s 00:08:34.996 user 0m5.446s 00:08:34.996 sys 0m0.846s 00:08:34.996 20:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:34.996 20:21:28 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.996 ************************************ 00:08:34.996 END TEST raid_superblock_test 00:08:34.996 ************************************ 00:08:34.996 20:21:28 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:08:34.996 20:21:28 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:34.996 20:21:28 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:34.996 20:21:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:34.996 ************************************ 00:08:34.996 START TEST raid_read_error_test 00:08:34.996 ************************************ 00:08:34.996 20:21:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 read 00:08:34.996 20:21:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:34.996 20:21:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:34.996 20:21:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:34.996 20:21:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:34.996 20:21:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:34.996 20:21:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:34.996 20:21:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:34.996 20:21:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:34.996 20:21:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:34.996 20:21:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:34.996 20:21:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:34.996 
20:21:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:34.996 20:21:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:34.996 20:21:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:34.996 20:21:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:34.996 20:21:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:34.996 20:21:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:34.996 20:21:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:34.996 20:21:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:34.996 20:21:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:34.996 20:21:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:34.996 20:21:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:34.996 20:21:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Nkq0hdpspk 00:08:34.996 20:21:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74097 00:08:34.996 20:21:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74097 00:08:34.996 20:21:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:34.996 20:21:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 74097 ']' 00:08:34.996 20:21:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.996 20:21:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:08:34.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.996 20:21:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:34.996 20:21:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:34.996 20:21:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.996 [2024-11-26 20:21:28.526610] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:34.996 [2024-11-26 20:21:28.526894] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74097 ] 00:08:35.255 [2024-11-26 20:21:28.693562] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.255 [2024-11-26 20:21:28.780377] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.513 [2024-11-26 20:21:28.859606] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:35.513 [2024-11-26 20:21:28.859663] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:36.084 20:21:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:36.084 20:21:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:36.084 20:21:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:36.084 20:21:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:36.084 20:21:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.084 20:21:29 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:36.084 BaseBdev1_malloc 00:08:36.084 20:21:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.084 20:21:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:36.084 20:21:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.084 20:21:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.084 true 00:08:36.084 20:21:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.084 20:21:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:36.084 20:21:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.084 20:21:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.084 [2024-11-26 20:21:29.440722] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:36.084 [2024-11-26 20:21:29.440847] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:36.084 [2024-11-26 20:21:29.440893] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:36.084 [2024-11-26 20:21:29.440948] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:36.084 [2024-11-26 20:21:29.443558] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:36.084 [2024-11-26 20:21:29.443660] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:36.084 BaseBdev1 00:08:36.084 20:21:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.084 20:21:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:36.084 20:21:29 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:36.084 20:21:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.084 20:21:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.084 BaseBdev2_malloc 00:08:36.085 20:21:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.085 20:21:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:36.085 20:21:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.085 20:21:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.085 true 00:08:36.085 20:21:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.085 20:21:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:36.085 20:21:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.085 20:21:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.085 [2024-11-26 20:21:29.498187] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:36.085 [2024-11-26 20:21:29.498255] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:36.085 [2024-11-26 20:21:29.498280] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:36.085 [2024-11-26 20:21:29.498289] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:36.085 [2024-11-26 20:21:29.500521] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:36.085 [2024-11-26 20:21:29.500642] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 
00:08:36.085 BaseBdev2 00:08:36.085 20:21:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.085 20:21:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:36.085 20:21:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.085 20:21:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.085 [2024-11-26 20:21:29.510200] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:36.085 [2024-11-26 20:21:29.512176] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:36.085 [2024-11-26 20:21:29.512435] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:36.085 [2024-11-26 20:21:29.512452] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:36.085 [2024-11-26 20:21:29.512761] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:36.085 [2024-11-26 20:21:29.512904] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:36.085 [2024-11-26 20:21:29.512919] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:08:36.085 [2024-11-26 20:21:29.513077] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:36.085 20:21:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.085 20:21:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:36.085 20:21:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:36.085 20:21:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:36.085 20:21:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:36.085 20:21:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:36.085 20:21:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:36.085 20:21:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.085 20:21:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.085 20:21:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.085 20:21:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.085 20:21:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:36.085 20:21:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.085 20:21:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.085 20:21:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.085 20:21:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.085 20:21:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.085 "name": "raid_bdev1", 00:08:36.085 "uuid": "9d1306db-068a-4c51-a34a-efe8c5b06656", 00:08:36.085 "strip_size_kb": 64, 00:08:36.085 "state": "online", 00:08:36.085 "raid_level": "concat", 00:08:36.085 "superblock": true, 00:08:36.085 "num_base_bdevs": 2, 00:08:36.085 "num_base_bdevs_discovered": 2, 00:08:36.085 "num_base_bdevs_operational": 2, 00:08:36.085 "base_bdevs_list": [ 00:08:36.085 { 00:08:36.085 "name": "BaseBdev1", 00:08:36.085 "uuid": "86d4cbd0-8a08-53c5-ac20-e3c5c8bf2776", 00:08:36.085 "is_configured": true, 00:08:36.085 "data_offset": 2048, 00:08:36.085 "data_size": 63488 
00:08:36.085 }, 00:08:36.085 { 00:08:36.085 "name": "BaseBdev2", 00:08:36.085 "uuid": "374df359-24ca-5d28-86d6-df96ed923e80", 00:08:36.085 "is_configured": true, 00:08:36.085 "data_offset": 2048, 00:08:36.085 "data_size": 63488 00:08:36.085 } 00:08:36.085 ] 00:08:36.085 }' 00:08:36.085 20:21:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.085 20:21:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.652 20:21:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:36.652 20:21:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:36.652 [2024-11-26 20:21:30.009765] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:37.584 20:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:37.584 20:21:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.584 20:21:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.584 20:21:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.584 20:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:37.584 20:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:37.584 20:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:37.584 20:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:37.584 20:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:37.584 20:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:37.584 20:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:37.584 20:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.584 20:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:37.584 20:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.584 20:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.584 20:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.584 20:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.584 20:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.584 20:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:37.584 20:21:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.584 20:21:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.584 20:21:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.585 20:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.585 "name": "raid_bdev1", 00:08:37.585 "uuid": "9d1306db-068a-4c51-a34a-efe8c5b06656", 00:08:37.585 "strip_size_kb": 64, 00:08:37.585 "state": "online", 00:08:37.585 "raid_level": "concat", 00:08:37.585 "superblock": true, 00:08:37.585 "num_base_bdevs": 2, 00:08:37.585 "num_base_bdevs_discovered": 2, 00:08:37.585 "num_base_bdevs_operational": 2, 00:08:37.585 "base_bdevs_list": [ 00:08:37.585 { 00:08:37.585 "name": "BaseBdev1", 00:08:37.585 "uuid": "86d4cbd0-8a08-53c5-ac20-e3c5c8bf2776", 00:08:37.585 "is_configured": true, 00:08:37.585 "data_offset": 2048, 00:08:37.585 "data_size": 63488 
00:08:37.585 }, 00:08:37.585 { 00:08:37.585 "name": "BaseBdev2", 00:08:37.585 "uuid": "374df359-24ca-5d28-86d6-df96ed923e80", 00:08:37.585 "is_configured": true, 00:08:37.585 "data_offset": 2048, 00:08:37.585 "data_size": 63488 00:08:37.585 } 00:08:37.585 ] 00:08:37.585 }' 00:08:37.585 20:21:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.585 20:21:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.842 20:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:37.842 20:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.842 20:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.101 [2024-11-26 20:21:31.395469] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:38.101 [2024-11-26 20:21:31.395587] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:38.101 [2024-11-26 20:21:31.398597] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:38.101 [2024-11-26 20:21:31.398700] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:38.101 [2024-11-26 20:21:31.398763] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:38.101 [2024-11-26 20:21:31.398814] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:08:38.101 { 00:08:38.101 "results": [ 00:08:38.101 { 00:08:38.101 "job": "raid_bdev1", 00:08:38.101 "core_mask": "0x1", 00:08:38.101 "workload": "randrw", 00:08:38.101 "percentage": 50, 00:08:38.101 "status": "finished", 00:08:38.101 "queue_depth": 1, 00:08:38.101 "io_size": 131072, 00:08:38.101 "runtime": 1.38664, 00:08:38.101 "iops": 13927.912075232216, 00:08:38.101 "mibps": 1740.989009404027, 00:08:38.101 
"io_failed": 1, 00:08:38.101 "io_timeout": 0, 00:08:38.101 "avg_latency_us": 100.23289050230777, 00:08:38.101 "min_latency_us": 26.270742358078603, 00:08:38.101 "max_latency_us": 1645.5545851528384 00:08:38.101 } 00:08:38.101 ], 00:08:38.101 "core_count": 1 00:08:38.101 } 00:08:38.101 20:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.101 20:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74097 00:08:38.101 20:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 74097 ']' 00:08:38.101 20:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 74097 00:08:38.101 20:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:38.101 20:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:38.101 20:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74097 00:08:38.101 killing process with pid 74097 00:08:38.101 20:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:38.101 20:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:38.101 20:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74097' 00:08:38.101 20:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 74097 00:08:38.101 [2024-11-26 20:21:31.449096] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:38.101 20:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 74097 00:08:38.101 [2024-11-26 20:21:31.472832] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:38.359 20:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Nkq0hdpspk 00:08:38.359 20:21:31 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:38.359 20:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:38.359 20:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:38.359 20:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:38.359 ************************************ 00:08:38.359 END TEST raid_read_error_test 00:08:38.359 ************************************ 00:08:38.359 20:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:38.359 20:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:38.359 20:21:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:38.359 00:08:38.359 real 0m3.439s 00:08:38.359 user 0m4.256s 00:08:38.359 sys 0m0.620s 00:08:38.359 20:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:38.359 20:21:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.359 20:21:31 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:08:38.359 20:21:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:38.359 20:21:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:38.359 20:21:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:38.618 ************************************ 00:08:38.618 START TEST raid_write_error_test 00:08:38.618 ************************************ 00:08:38.618 20:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 write 00:08:38.618 20:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:38.618 20:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:38.618 20:21:31 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:38.618 20:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:38.618 20:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:38.618 20:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:38.618 20:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:38.618 20:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:38.618 20:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:38.618 20:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:38.618 20:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:38.618 20:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:38.618 20:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:38.618 20:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:38.618 20:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:38.618 20:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:38.618 20:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:38.619 20:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:38.619 20:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:38.619 20:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:38.619 20:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:38.619 20:21:31 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:38.619 20:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.hypIf6Ag7J 00:08:38.619 20:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74226 00:08:38.619 20:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74226 00:08:38.619 20:21:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:38.619 20:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 74226 ']' 00:08:38.619 20:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.619 20:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:38.619 20:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.619 20:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:38.619 20:21:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.619 [2024-11-26 20:21:32.026762] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:38.619 [2024-11-26 20:21:32.026986] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74226 ] 00:08:38.876 [2024-11-26 20:21:32.192952] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.876 [2024-11-26 20:21:32.279312] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.876 [2024-11-26 20:21:32.357319] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:38.876 [2024-11-26 20:21:32.357379] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:39.443 20:21:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:39.443 20:21:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:39.443 20:21:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:39.443 20:21:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:39.443 20:21:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.443 20:21:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.443 BaseBdev1_malloc 00:08:39.443 20:21:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.443 20:21:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:39.443 20:21:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.443 20:21:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.443 true 00:08:39.443 20:21:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:08:39.443 20:21:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:39.443 20:21:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.443 20:21:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.443 [2024-11-26 20:21:32.964019] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:39.443 [2024-11-26 20:21:32.964096] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:39.443 [2024-11-26 20:21:32.964122] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:39.443 [2024-11-26 20:21:32.964133] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:39.443 [2024-11-26 20:21:32.966640] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:39.443 [2024-11-26 20:21:32.966681] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:39.443 BaseBdev1 00:08:39.443 20:21:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.443 20:21:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:39.443 20:21:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:39.443 20:21:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.443 20:21:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.702 BaseBdev2_malloc 00:08:39.702 20:21:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.702 20:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:39.702 20:21:33 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.702 20:21:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.702 true 00:08:39.702 20:21:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.702 20:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:39.702 20:21:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.702 20:21:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.702 [2024-11-26 20:21:33.024763] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:39.702 [2024-11-26 20:21:33.024956] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:39.702 [2024-11-26 20:21:33.024998] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:39.702 [2024-11-26 20:21:33.025012] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:39.702 [2024-11-26 20:21:33.028382] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:39.702 [2024-11-26 20:21:33.028503] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:39.702 BaseBdev2 00:08:39.702 20:21:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.702 20:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:39.702 20:21:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.702 20:21:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.702 [2024-11-26 20:21:33.036814] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:39.702 [2024-11-26 20:21:33.039001] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:39.702 [2024-11-26 20:21:33.039286] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:39.702 [2024-11-26 20:21:33.039306] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:39.702 [2024-11-26 20:21:33.039649] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:39.702 [2024-11-26 20:21:33.039829] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:39.702 [2024-11-26 20:21:33.039859] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:08:39.702 [2024-11-26 20:21:33.040050] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:39.702 20:21:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.702 20:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:39.702 20:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:39.702 20:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:39.702 20:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:39.702 20:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.702 20:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:39.702 20:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.702 20:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.702 20:21:33 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.702 20:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.702 20:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.702 20:21:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.702 20:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:39.702 20:21:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.702 20:21:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.702 20:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.702 "name": "raid_bdev1", 00:08:39.702 "uuid": "bb0b3fea-2046-4f43-a70e-6986c770ce24", 00:08:39.702 "strip_size_kb": 64, 00:08:39.702 "state": "online", 00:08:39.702 "raid_level": "concat", 00:08:39.702 "superblock": true, 00:08:39.702 "num_base_bdevs": 2, 00:08:39.702 "num_base_bdevs_discovered": 2, 00:08:39.702 "num_base_bdevs_operational": 2, 00:08:39.702 "base_bdevs_list": [ 00:08:39.702 { 00:08:39.702 "name": "BaseBdev1", 00:08:39.702 "uuid": "8cecf75b-b989-5556-839e-d6c0bc0b2c66", 00:08:39.702 "is_configured": true, 00:08:39.702 "data_offset": 2048, 00:08:39.702 "data_size": 63488 00:08:39.702 }, 00:08:39.702 { 00:08:39.702 "name": "BaseBdev2", 00:08:39.702 "uuid": "7adbb768-4a73-5d8e-8a4e-0ce4d3324b1f", 00:08:39.702 "is_configured": true, 00:08:39.702 "data_offset": 2048, 00:08:39.702 "data_size": 63488 00:08:39.702 } 00:08:39.702 ] 00:08:39.702 }' 00:08:39.702 20:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.702 20:21:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.269 20:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:08:40.269 20:21:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:40.269 [2024-11-26 20:21:33.616254] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:41.207 20:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:41.207 20:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.207 20:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.207 20:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.207 20:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:41.207 20:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:41.207 20:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:41.207 20:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:41.207 20:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:41.207 20:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:41.207 20:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:41.207 20:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.207 20:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:41.207 20:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.207 20:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:08:41.207 20:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.207 20:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.207 20:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.207 20:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:41.207 20:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.207 20:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.207 20:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.207 20:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.207 "name": "raid_bdev1", 00:08:41.207 "uuid": "bb0b3fea-2046-4f43-a70e-6986c770ce24", 00:08:41.207 "strip_size_kb": 64, 00:08:41.207 "state": "online", 00:08:41.207 "raid_level": "concat", 00:08:41.207 "superblock": true, 00:08:41.207 "num_base_bdevs": 2, 00:08:41.207 "num_base_bdevs_discovered": 2, 00:08:41.207 "num_base_bdevs_operational": 2, 00:08:41.207 "base_bdevs_list": [ 00:08:41.207 { 00:08:41.207 "name": "BaseBdev1", 00:08:41.207 "uuid": "8cecf75b-b989-5556-839e-d6c0bc0b2c66", 00:08:41.207 "is_configured": true, 00:08:41.207 "data_offset": 2048, 00:08:41.207 "data_size": 63488 00:08:41.207 }, 00:08:41.207 { 00:08:41.207 "name": "BaseBdev2", 00:08:41.207 "uuid": "7adbb768-4a73-5d8e-8a4e-0ce4d3324b1f", 00:08:41.207 "is_configured": true, 00:08:41.207 "data_offset": 2048, 00:08:41.207 "data_size": 63488 00:08:41.207 } 00:08:41.207 ] 00:08:41.207 }' 00:08:41.207 20:21:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.207 20:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.466 20:21:34 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:41.466 20:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.466 20:21:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.466 [2024-11-26 20:21:35.001297] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:41.466 [2024-11-26 20:21:35.001394] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:41.466 [2024-11-26 20:21:35.004364] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:41.466 [2024-11-26 20:21:35.004465] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:41.466 [2024-11-26 20:21:35.004541] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:41.466 [2024-11-26 20:21:35.004589] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:08:41.466 { 00:08:41.466 "results": [ 00:08:41.466 { 00:08:41.466 "job": "raid_bdev1", 00:08:41.466 "core_mask": "0x1", 00:08:41.466 "workload": "randrw", 00:08:41.466 "percentage": 50, 00:08:41.466 "status": "finished", 00:08:41.466 "queue_depth": 1, 00:08:41.466 "io_size": 131072, 00:08:41.466 "runtime": 1.385718, 00:08:41.466 "iops": 13777.695028858685, 00:08:41.466 "mibps": 1722.2118786073356, 00:08:41.466 "io_failed": 1, 00:08:41.466 "io_timeout": 0, 00:08:41.466 "avg_latency_us": 101.29963778764343, 00:08:41.466 "min_latency_us": 27.276855895196505, 00:08:41.466 "max_latency_us": 1645.5545851528384 00:08:41.467 } 00:08:41.467 ], 00:08:41.467 "core_count": 1 00:08:41.467 } 00:08:41.467 20:21:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.467 20:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74226 00:08:41.467 20:21:35 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 74226 ']' 00:08:41.467 20:21:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 74226 00:08:41.467 20:21:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:08:41.467 20:21:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:41.467 20:21:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74226 00:08:41.726 20:21:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:41.726 killing process with pid 74226 00:08:41.726 20:21:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:41.726 20:21:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74226' 00:08:41.726 20:21:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 74226 00:08:41.726 20:21:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 74226 00:08:41.726 [2024-11-26 20:21:35.046033] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:41.726 [2024-11-26 20:21:35.073726] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:41.985 20:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:41.985 20:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.hypIf6Ag7J 00:08:41.985 20:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:41.985 20:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:41.985 20:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:41.985 20:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:41.985 20:21:35 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:41.985 20:21:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:41.985 00:08:41.985 real 0m3.539s 00:08:41.985 user 0m4.412s 00:08:41.985 sys 0m0.617s 00:08:41.985 20:21:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:41.985 ************************************ 00:08:41.985 END TEST raid_write_error_test 00:08:41.985 ************************************ 00:08:41.985 20:21:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.985 20:21:35 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:41.985 20:21:35 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:08:41.985 20:21:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:41.985 20:21:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:41.985 20:21:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:41.985 ************************************ 00:08:41.985 START TEST raid_state_function_test 00:08:41.985 ************************************ 00:08:41.985 20:21:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 false 00:08:41.985 20:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:41.985 20:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:41.985 20:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:41.985 20:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:41.985 20:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:41.985 20:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i <= num_base_bdevs )) 00:08:41.985 20:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:41.985 20:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:41.986 20:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:41.986 20:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:42.244 20:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:42.244 20:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:42.244 20:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:42.244 20:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:42.244 20:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:42.244 20:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:42.244 20:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:42.244 20:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:42.244 20:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:42.244 20:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:42.244 20:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:42.244 20:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:42.244 20:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=74364 00:08:42.244 20:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:42.244 20:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74364' 00:08:42.244 Process raid pid: 74364 00:08:42.244 20:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 74364 00:08:42.244 20:21:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 74364 ']' 00:08:42.244 20:21:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.244 20:21:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:42.244 20:21:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.244 20:21:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:42.244 20:21:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.244 [2024-11-26 20:21:35.628456] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:42.244 [2024-11-26 20:21:35.628742] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:42.502 [2024-11-26 20:21:35.797353] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.502 [2024-11-26 20:21:35.883369] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.502 [2024-11-26 20:21:35.955043] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:42.502 [2024-11-26 20:21:35.955170] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:43.070 20:21:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:43.070 20:21:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:43.070 20:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:43.070 20:21:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.070 20:21:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.070 [2024-11-26 20:21:36.532074] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:43.070 [2024-11-26 20:21:36.532200] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:43.070 [2024-11-26 20:21:36.532220] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:43.070 [2024-11-26 20:21:36.532233] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:43.070 20:21:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.070 20:21:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:43.070 20:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:43.070 20:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:43.070 20:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:43.070 20:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:43.070 20:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:43.070 20:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.070 20:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.070 20:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.070 20:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.070 20:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.070 20:21:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.070 20:21:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.070 20:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.070 20:21:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.070 20:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.070 "name": "Existed_Raid", 00:08:43.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.070 "strip_size_kb": 0, 00:08:43.070 "state": "configuring", 00:08:43.070 
"raid_level": "raid1", 00:08:43.070 "superblock": false, 00:08:43.070 "num_base_bdevs": 2, 00:08:43.070 "num_base_bdevs_discovered": 0, 00:08:43.070 "num_base_bdevs_operational": 2, 00:08:43.070 "base_bdevs_list": [ 00:08:43.070 { 00:08:43.070 "name": "BaseBdev1", 00:08:43.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.070 "is_configured": false, 00:08:43.070 "data_offset": 0, 00:08:43.070 "data_size": 0 00:08:43.070 }, 00:08:43.070 { 00:08:43.070 "name": "BaseBdev2", 00:08:43.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.070 "is_configured": false, 00:08:43.070 "data_offset": 0, 00:08:43.070 "data_size": 0 00:08:43.070 } 00:08:43.070 ] 00:08:43.070 }' 00:08:43.070 20:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.070 20:21:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.677 20:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:43.677 20:21:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.677 20:21:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.677 [2024-11-26 20:21:37.007217] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:43.677 [2024-11-26 20:21:37.007272] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:08:43.677 20:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.677 20:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:43.677 20:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.677 20:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:43.677 [2024-11-26 20:21:37.019240] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:43.677 [2024-11-26 20:21:37.019299] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:43.677 [2024-11-26 20:21:37.019309] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:43.677 [2024-11-26 20:21:37.019335] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:43.677 20:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.677 20:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:43.677 20:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.677 20:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.677 [2024-11-26 20:21:37.046785] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:43.677 BaseBdev1 00:08:43.677 20:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.677 20:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:43.677 20:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:43.677 20:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:43.677 20:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:43.677 20:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:43.677 20:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:43.677 20:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:08:43.677 20:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.677 20:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.677 20:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.677 20:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:43.677 20:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.677 20:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.677 [ 00:08:43.677 { 00:08:43.677 "name": "BaseBdev1", 00:08:43.677 "aliases": [ 00:08:43.677 "13f97a40-9273-4083-91d2-3044afbb34bf" 00:08:43.677 ], 00:08:43.677 "product_name": "Malloc disk", 00:08:43.677 "block_size": 512, 00:08:43.677 "num_blocks": 65536, 00:08:43.677 "uuid": "13f97a40-9273-4083-91d2-3044afbb34bf", 00:08:43.677 "assigned_rate_limits": { 00:08:43.677 "rw_ios_per_sec": 0, 00:08:43.677 "rw_mbytes_per_sec": 0, 00:08:43.677 "r_mbytes_per_sec": 0, 00:08:43.677 "w_mbytes_per_sec": 0 00:08:43.677 }, 00:08:43.677 "claimed": true, 00:08:43.677 "claim_type": "exclusive_write", 00:08:43.677 "zoned": false, 00:08:43.677 "supported_io_types": { 00:08:43.677 "read": true, 00:08:43.677 "write": true, 00:08:43.677 "unmap": true, 00:08:43.677 "flush": true, 00:08:43.677 "reset": true, 00:08:43.677 "nvme_admin": false, 00:08:43.677 "nvme_io": false, 00:08:43.677 "nvme_io_md": false, 00:08:43.677 "write_zeroes": true, 00:08:43.677 "zcopy": true, 00:08:43.677 "get_zone_info": false, 00:08:43.677 "zone_management": false, 00:08:43.677 "zone_append": false, 00:08:43.677 "compare": false, 00:08:43.677 "compare_and_write": false, 00:08:43.677 "abort": true, 00:08:43.677 "seek_hole": false, 00:08:43.677 "seek_data": false, 00:08:43.677 "copy": true, 00:08:43.677 "nvme_iov_md": 
false 00:08:43.677 }, 00:08:43.677 "memory_domains": [ 00:08:43.677 { 00:08:43.677 "dma_device_id": "system", 00:08:43.677 "dma_device_type": 1 00:08:43.677 }, 00:08:43.677 { 00:08:43.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.677 "dma_device_type": 2 00:08:43.677 } 00:08:43.677 ], 00:08:43.677 "driver_specific": {} 00:08:43.677 } 00:08:43.677 ] 00:08:43.677 20:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.677 20:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:43.677 20:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:43.677 20:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:43.677 20:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:43.677 20:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:43.677 20:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:43.677 20:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:43.677 20:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.677 20:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.677 20:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.677 20:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.677 20:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.677 20:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.677 
20:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.677 20:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.677 20:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.677 20:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.677 "name": "Existed_Raid", 00:08:43.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.677 "strip_size_kb": 0, 00:08:43.677 "state": "configuring", 00:08:43.677 "raid_level": "raid1", 00:08:43.677 "superblock": false, 00:08:43.677 "num_base_bdevs": 2, 00:08:43.677 "num_base_bdevs_discovered": 1, 00:08:43.677 "num_base_bdevs_operational": 2, 00:08:43.677 "base_bdevs_list": [ 00:08:43.677 { 00:08:43.677 "name": "BaseBdev1", 00:08:43.677 "uuid": "13f97a40-9273-4083-91d2-3044afbb34bf", 00:08:43.677 "is_configured": true, 00:08:43.677 "data_offset": 0, 00:08:43.677 "data_size": 65536 00:08:43.677 }, 00:08:43.677 { 00:08:43.677 "name": "BaseBdev2", 00:08:43.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.677 "is_configured": false, 00:08:43.677 "data_offset": 0, 00:08:43.677 "data_size": 0 00:08:43.677 } 00:08:43.677 ] 00:08:43.677 }' 00:08:43.677 20:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.677 20:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.248 20:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:44.248 20:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.248 20:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.248 [2024-11-26 20:21:37.573988] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:44.248 [2024-11-26 20:21:37.574061] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:08:44.248 20:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.248 20:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:44.248 20:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.248 20:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.248 [2024-11-26 20:21:37.586001] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:44.248 [2024-11-26 20:21:37.588194] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:44.248 [2024-11-26 20:21:37.588246] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:44.248 20:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.248 20:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:44.248 20:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:44.248 20:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:44.248 20:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:44.248 20:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:44.248 20:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:44.248 20:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:44.248 20:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:08:44.248 20:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.248 20:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.248 20:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.248 20:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.248 20:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.248 20:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.248 20:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.248 20:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.248 20:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.248 20:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.248 "name": "Existed_Raid", 00:08:44.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.248 "strip_size_kb": 0, 00:08:44.248 "state": "configuring", 00:08:44.248 "raid_level": "raid1", 00:08:44.248 "superblock": false, 00:08:44.248 "num_base_bdevs": 2, 00:08:44.248 "num_base_bdevs_discovered": 1, 00:08:44.248 "num_base_bdevs_operational": 2, 00:08:44.248 "base_bdevs_list": [ 00:08:44.248 { 00:08:44.248 "name": "BaseBdev1", 00:08:44.248 "uuid": "13f97a40-9273-4083-91d2-3044afbb34bf", 00:08:44.248 "is_configured": true, 00:08:44.248 "data_offset": 0, 00:08:44.248 "data_size": 65536 00:08:44.248 }, 00:08:44.248 { 00:08:44.248 "name": "BaseBdev2", 00:08:44.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.248 "is_configured": false, 00:08:44.248 "data_offset": 0, 00:08:44.248 "data_size": 0 00:08:44.248 } 00:08:44.248 ] 
00:08:44.248 }' 00:08:44.248 20:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.248 20:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.508 20:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:44.508 20:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.508 20:21:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.508 [2024-11-26 20:21:38.020840] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:44.508 [2024-11-26 20:21:38.020901] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:44.508 [2024-11-26 20:21:38.020925] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:44.508 [2024-11-26 20:21:38.021255] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:44.508 [2024-11-26 20:21:38.021452] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:44.508 [2024-11-26 20:21:38.021478] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:08:44.508 [2024-11-26 20:21:38.021758] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:44.508 BaseBdev2 00:08:44.508 20:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.508 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:44.508 20:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:44.508 20:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:44.508 20:21:38 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@901 -- # local i 00:08:44.508 20:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:44.508 20:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:44.508 20:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:44.509 20:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.509 20:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.509 20:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.509 20:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:44.509 20:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.509 20:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.509 [ 00:08:44.509 { 00:08:44.509 "name": "BaseBdev2", 00:08:44.509 "aliases": [ 00:08:44.509 "04d0bebb-e656-4028-9b52-da5d05825e3f" 00:08:44.509 ], 00:08:44.509 "product_name": "Malloc disk", 00:08:44.509 "block_size": 512, 00:08:44.509 "num_blocks": 65536, 00:08:44.509 "uuid": "04d0bebb-e656-4028-9b52-da5d05825e3f", 00:08:44.509 "assigned_rate_limits": { 00:08:44.509 "rw_ios_per_sec": 0, 00:08:44.509 "rw_mbytes_per_sec": 0, 00:08:44.509 "r_mbytes_per_sec": 0, 00:08:44.509 "w_mbytes_per_sec": 0 00:08:44.509 }, 00:08:44.509 "claimed": true, 00:08:44.509 "claim_type": "exclusive_write", 00:08:44.509 "zoned": false, 00:08:44.509 "supported_io_types": { 00:08:44.509 "read": true, 00:08:44.509 "write": true, 00:08:44.509 "unmap": true, 00:08:44.509 "flush": true, 00:08:44.509 "reset": true, 00:08:44.509 "nvme_admin": false, 00:08:44.509 "nvme_io": false, 00:08:44.509 "nvme_io_md": false, 00:08:44.509 "write_zeroes": 
true, 00:08:44.509 "zcopy": true, 00:08:44.509 "get_zone_info": false, 00:08:44.509 "zone_management": false, 00:08:44.509 "zone_append": false, 00:08:44.509 "compare": false, 00:08:44.509 "compare_and_write": false, 00:08:44.509 "abort": true, 00:08:44.509 "seek_hole": false, 00:08:44.509 "seek_data": false, 00:08:44.509 "copy": true, 00:08:44.509 "nvme_iov_md": false 00:08:44.509 }, 00:08:44.509 "memory_domains": [ 00:08:44.509 { 00:08:44.509 "dma_device_id": "system", 00:08:44.509 "dma_device_type": 1 00:08:44.509 }, 00:08:44.509 { 00:08:44.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.768 "dma_device_type": 2 00:08:44.768 } 00:08:44.768 ], 00:08:44.768 "driver_specific": {} 00:08:44.768 } 00:08:44.768 ] 00:08:44.768 20:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.768 20:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:44.768 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:44.768 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:44.768 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:44.768 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:44.768 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:44.768 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:44.768 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:44.768 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:44.768 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.768 20:21:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.768 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.768 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.768 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.768 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.768 20:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.768 20:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.768 20:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.768 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.768 "name": "Existed_Raid", 00:08:44.768 "uuid": "51378f23-034a-4f5f-ac1b-956476c955d5", 00:08:44.768 "strip_size_kb": 0, 00:08:44.768 "state": "online", 00:08:44.768 "raid_level": "raid1", 00:08:44.768 "superblock": false, 00:08:44.768 "num_base_bdevs": 2, 00:08:44.768 "num_base_bdevs_discovered": 2, 00:08:44.768 "num_base_bdevs_operational": 2, 00:08:44.768 "base_bdevs_list": [ 00:08:44.768 { 00:08:44.768 "name": "BaseBdev1", 00:08:44.768 "uuid": "13f97a40-9273-4083-91d2-3044afbb34bf", 00:08:44.768 "is_configured": true, 00:08:44.768 "data_offset": 0, 00:08:44.768 "data_size": 65536 00:08:44.768 }, 00:08:44.768 { 00:08:44.768 "name": "BaseBdev2", 00:08:44.768 "uuid": "04d0bebb-e656-4028-9b52-da5d05825e3f", 00:08:44.768 "is_configured": true, 00:08:44.768 "data_offset": 0, 00:08:44.768 "data_size": 65536 00:08:44.768 } 00:08:44.769 ] 00:08:44.769 }' 00:08:44.769 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.769 20:21:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.028 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:45.028 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:45.028 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:45.028 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:45.028 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:45.028 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:45.028 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:45.028 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:45.028 20:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.028 20:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.028 [2024-11-26 20:21:38.540418] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:45.028 20:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.288 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:45.288 "name": "Existed_Raid", 00:08:45.288 "aliases": [ 00:08:45.288 "51378f23-034a-4f5f-ac1b-956476c955d5" 00:08:45.288 ], 00:08:45.288 "product_name": "Raid Volume", 00:08:45.288 "block_size": 512, 00:08:45.288 "num_blocks": 65536, 00:08:45.288 "uuid": "51378f23-034a-4f5f-ac1b-956476c955d5", 00:08:45.288 "assigned_rate_limits": { 00:08:45.288 "rw_ios_per_sec": 0, 00:08:45.288 "rw_mbytes_per_sec": 0, 00:08:45.288 "r_mbytes_per_sec": 0, 00:08:45.288 
"w_mbytes_per_sec": 0 00:08:45.288 }, 00:08:45.288 "claimed": false, 00:08:45.288 "zoned": false, 00:08:45.288 "supported_io_types": { 00:08:45.288 "read": true, 00:08:45.288 "write": true, 00:08:45.288 "unmap": false, 00:08:45.288 "flush": false, 00:08:45.288 "reset": true, 00:08:45.288 "nvme_admin": false, 00:08:45.288 "nvme_io": false, 00:08:45.288 "nvme_io_md": false, 00:08:45.288 "write_zeroes": true, 00:08:45.288 "zcopy": false, 00:08:45.288 "get_zone_info": false, 00:08:45.288 "zone_management": false, 00:08:45.288 "zone_append": false, 00:08:45.288 "compare": false, 00:08:45.288 "compare_and_write": false, 00:08:45.288 "abort": false, 00:08:45.288 "seek_hole": false, 00:08:45.288 "seek_data": false, 00:08:45.288 "copy": false, 00:08:45.288 "nvme_iov_md": false 00:08:45.288 }, 00:08:45.288 "memory_domains": [ 00:08:45.288 { 00:08:45.288 "dma_device_id": "system", 00:08:45.288 "dma_device_type": 1 00:08:45.288 }, 00:08:45.288 { 00:08:45.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.288 "dma_device_type": 2 00:08:45.288 }, 00:08:45.288 { 00:08:45.288 "dma_device_id": "system", 00:08:45.288 "dma_device_type": 1 00:08:45.288 }, 00:08:45.288 { 00:08:45.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.288 "dma_device_type": 2 00:08:45.288 } 00:08:45.288 ], 00:08:45.288 "driver_specific": { 00:08:45.288 "raid": { 00:08:45.288 "uuid": "51378f23-034a-4f5f-ac1b-956476c955d5", 00:08:45.288 "strip_size_kb": 0, 00:08:45.288 "state": "online", 00:08:45.288 "raid_level": "raid1", 00:08:45.288 "superblock": false, 00:08:45.288 "num_base_bdevs": 2, 00:08:45.288 "num_base_bdevs_discovered": 2, 00:08:45.288 "num_base_bdevs_operational": 2, 00:08:45.288 "base_bdevs_list": [ 00:08:45.288 { 00:08:45.288 "name": "BaseBdev1", 00:08:45.288 "uuid": "13f97a40-9273-4083-91d2-3044afbb34bf", 00:08:45.288 "is_configured": true, 00:08:45.288 "data_offset": 0, 00:08:45.288 "data_size": 65536 00:08:45.288 }, 00:08:45.288 { 00:08:45.288 "name": "BaseBdev2", 00:08:45.288 "uuid": 
"04d0bebb-e656-4028-9b52-da5d05825e3f", 00:08:45.288 "is_configured": true, 00:08:45.288 "data_offset": 0, 00:08:45.288 "data_size": 65536 00:08:45.288 } 00:08:45.288 ] 00:08:45.288 } 00:08:45.288 } 00:08:45.288 }' 00:08:45.288 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:45.288 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:45.288 BaseBdev2' 00:08:45.288 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:45.288 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:45.288 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:45.288 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:45.288 20:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.288 20:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.288 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:45.288 20:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.288 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:45.288 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:45.288 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:45.288 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:45.288 20:21:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.288 20:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.288 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:45.288 20:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.288 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:45.288 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:45.289 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:45.289 20:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.289 20:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.289 [2024-11-26 20:21:38.771798] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:45.289 20:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.289 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:45.289 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:45.289 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:45.289 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:45.289 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:45.289 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:45.289 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:08:45.289 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:45.289 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:45.289 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:45.289 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:45.289 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.289 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.289 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.289 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.289 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.289 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.289 20:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.289 20:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.289 20:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.547 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.547 "name": "Existed_Raid", 00:08:45.547 "uuid": "51378f23-034a-4f5f-ac1b-956476c955d5", 00:08:45.547 "strip_size_kb": 0, 00:08:45.547 "state": "online", 00:08:45.547 "raid_level": "raid1", 00:08:45.547 "superblock": false, 00:08:45.547 "num_base_bdevs": 2, 00:08:45.547 "num_base_bdevs_discovered": 1, 00:08:45.547 "num_base_bdevs_operational": 1, 00:08:45.547 "base_bdevs_list": [ 00:08:45.547 { 
00:08:45.547 "name": null, 00:08:45.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.547 "is_configured": false, 00:08:45.547 "data_offset": 0, 00:08:45.547 "data_size": 65536 00:08:45.547 }, 00:08:45.547 { 00:08:45.547 "name": "BaseBdev2", 00:08:45.547 "uuid": "04d0bebb-e656-4028-9b52-da5d05825e3f", 00:08:45.547 "is_configured": true, 00:08:45.547 "data_offset": 0, 00:08:45.547 "data_size": 65536 00:08:45.547 } 00:08:45.547 ] 00:08:45.547 }' 00:08:45.547 20:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.547 20:21:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.810 20:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:45.810 20:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:45.810 20:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.810 20:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.810 20:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.810 20:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:45.810 20:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.810 20:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:45.810 20:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:45.810 20:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:45.810 20:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.810 20:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:45.810 [2024-11-26 20:21:39.292786] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:45.810 [2024-11-26 20:21:39.292899] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:45.810 [2024-11-26 20:21:39.314954] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:45.810 [2024-11-26 20:21:39.315008] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:45.810 [2024-11-26 20:21:39.315021] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:45.810 20:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.810 20:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:45.810 20:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:45.810 20:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.810 20:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.810 20:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.810 20:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:45.810 20:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.072 20:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:46.072 20:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:46.072 20:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:46.072 20:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 74364 00:08:46.072 20:21:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 74364 ']' 00:08:46.072 20:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 74364 00:08:46.072 20:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:46.072 20:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:46.072 20:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74364 00:08:46.072 killing process with pid 74364 00:08:46.072 20:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:46.072 20:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:46.072 20:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74364' 00:08:46.072 20:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 74364 00:08:46.072 [2024-11-26 20:21:39.408901] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:46.072 20:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 74364 00:08:46.072 [2024-11-26 20:21:39.410584] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:46.330 20:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:46.330 00:08:46.330 real 0m4.260s 00:08:46.330 user 0m6.534s 00:08:46.330 sys 0m0.914s 00:08:46.330 20:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:46.330 20:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.330 ************************************ 00:08:46.330 END TEST raid_state_function_test 00:08:46.330 ************************************ 00:08:46.330 20:21:39 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:08:46.330 20:21:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:46.330 20:21:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:46.330 20:21:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:46.330 ************************************ 00:08:46.330 START TEST raid_state_function_test_sb 00:08:46.330 ************************************ 00:08:46.330 20:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:08:46.330 20:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:46.330 20:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:46.330 20:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:46.330 20:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:46.330 20:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:46.330 20:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:46.330 20:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:46.330 20:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:46.330 20:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:46.330 20:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:46.330 20:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:46.330 20:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:46.330 20:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:46.330 20:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:46.330 20:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:46.330 20:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:46.330 20:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:46.330 20:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:46.330 20:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:46.330 20:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:46.330 20:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:46.330 20:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:46.330 20:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74606 00:08:46.330 20:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:46.330 20:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74606' 00:08:46.330 Process raid pid: 74606 00:08:46.330 20:21:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74606 00:08:46.330 20:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 74606 ']' 00:08:46.330 20:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.330 20:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:46.330 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.330 20:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.330 20:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:46.330 20:21:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.588 [2024-11-26 20:21:39.963463] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:46.588 [2024-11-26 20:21:39.963645] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:46.588 [2024-11-26 20:21:40.115596] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.847 [2024-11-26 20:21:40.204522] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.847 [2024-11-26 20:21:40.280774] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:46.847 [2024-11-26 20:21:40.280810] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:47.414 20:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:47.415 20:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:08:47.415 20:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:47.415 20:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.415 20:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.415 [2024-11-26 20:21:40.876839] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:47.415 [2024-11-26 20:21:40.876915] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:47.415 [2024-11-26 20:21:40.876939] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:47.415 [2024-11-26 20:21:40.876952] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:47.415 20:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.415 20:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:47.415 20:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.415 20:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.415 20:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:47.415 20:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:47.415 20:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:47.415 20:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.415 20:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.415 20:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.415 20:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.415 20:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.415 20:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:08:47.415 20:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.415 20:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.415 20:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.415 20:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.415 "name": "Existed_Raid", 00:08:47.415 "uuid": "614aeba8-7955-4793-9ae1-0becec7ea5ab", 00:08:47.415 "strip_size_kb": 0, 00:08:47.415 "state": "configuring", 00:08:47.415 "raid_level": "raid1", 00:08:47.415 "superblock": true, 00:08:47.415 "num_base_bdevs": 2, 00:08:47.415 "num_base_bdevs_discovered": 0, 00:08:47.415 "num_base_bdevs_operational": 2, 00:08:47.415 "base_bdevs_list": [ 00:08:47.415 { 00:08:47.415 "name": "BaseBdev1", 00:08:47.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.415 "is_configured": false, 00:08:47.415 "data_offset": 0, 00:08:47.415 "data_size": 0 00:08:47.415 }, 00:08:47.415 { 00:08:47.415 "name": "BaseBdev2", 00:08:47.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.415 "is_configured": false, 00:08:47.415 "data_offset": 0, 00:08:47.415 "data_size": 0 00:08:47.415 } 00:08:47.415 ] 00:08:47.415 }' 00:08:47.415 20:21:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.415 20:21:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.983 20:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:47.983 20:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.983 20:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.983 [2024-11-26 20:21:41.360002] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:08:47.983 [2024-11-26 20:21:41.360060] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:08:47.983 20:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.983 20:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:47.983 20:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.983 20:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.983 [2024-11-26 20:21:41.368086] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:47.983 [2024-11-26 20:21:41.368145] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:47.983 [2024-11-26 20:21:41.368156] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:47.983 [2024-11-26 20:21:41.368168] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:47.983 20:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.983 20:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:47.983 20:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.983 20:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.983 [2024-11-26 20:21:41.396308] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:47.983 BaseBdev1 00:08:47.983 20:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.983 20:21:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:47.983 20:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:47.983 20:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:47.983 20:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:47.983 20:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:47.983 20:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:47.983 20:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:47.983 20:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.983 20:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.983 20:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.983 20:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:47.983 20:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.983 20:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.983 [ 00:08:47.983 { 00:08:47.983 "name": "BaseBdev1", 00:08:47.983 "aliases": [ 00:08:47.983 "4792e161-863a-4aa1-a7db-98b16f6151ab" 00:08:47.983 ], 00:08:47.983 "product_name": "Malloc disk", 00:08:47.983 "block_size": 512, 00:08:47.983 "num_blocks": 65536, 00:08:47.983 "uuid": "4792e161-863a-4aa1-a7db-98b16f6151ab", 00:08:47.983 "assigned_rate_limits": { 00:08:47.983 "rw_ios_per_sec": 0, 00:08:47.983 "rw_mbytes_per_sec": 0, 00:08:47.983 "r_mbytes_per_sec": 0, 00:08:47.983 "w_mbytes_per_sec": 0 00:08:47.983 }, 00:08:47.983 "claimed": true, 
00:08:47.983 "claim_type": "exclusive_write", 00:08:47.983 "zoned": false, 00:08:47.983 "supported_io_types": { 00:08:47.984 "read": true, 00:08:47.984 "write": true, 00:08:47.984 "unmap": true, 00:08:47.984 "flush": true, 00:08:47.984 "reset": true, 00:08:47.984 "nvme_admin": false, 00:08:47.984 "nvme_io": false, 00:08:47.984 "nvme_io_md": false, 00:08:47.984 "write_zeroes": true, 00:08:47.984 "zcopy": true, 00:08:47.984 "get_zone_info": false, 00:08:47.984 "zone_management": false, 00:08:47.984 "zone_append": false, 00:08:47.984 "compare": false, 00:08:47.984 "compare_and_write": false, 00:08:47.984 "abort": true, 00:08:47.984 "seek_hole": false, 00:08:47.984 "seek_data": false, 00:08:47.984 "copy": true, 00:08:47.984 "nvme_iov_md": false 00:08:47.984 }, 00:08:47.984 "memory_domains": [ 00:08:47.984 { 00:08:47.984 "dma_device_id": "system", 00:08:47.984 "dma_device_type": 1 00:08:47.984 }, 00:08:47.984 { 00:08:47.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.984 "dma_device_type": 2 00:08:47.984 } 00:08:47.984 ], 00:08:47.984 "driver_specific": {} 00:08:47.984 } 00:08:47.984 ] 00:08:47.984 20:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.984 20:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:47.984 20:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:47.984 20:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.984 20:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.984 20:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:47.984 20:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:47.984 20:21:41 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:47.984 20:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.984 20:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.984 20:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.984 20:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.984 20:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.984 20:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.984 20:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.984 20:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.984 20:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.984 20:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.984 "name": "Existed_Raid", 00:08:47.984 "uuid": "9608aa90-6ce1-42d9-b629-b2ce9aca5aa0", 00:08:47.984 "strip_size_kb": 0, 00:08:47.984 "state": "configuring", 00:08:47.984 "raid_level": "raid1", 00:08:47.984 "superblock": true, 00:08:47.984 "num_base_bdevs": 2, 00:08:47.984 "num_base_bdevs_discovered": 1, 00:08:47.984 "num_base_bdevs_operational": 2, 00:08:47.984 "base_bdevs_list": [ 00:08:47.984 { 00:08:47.984 "name": "BaseBdev1", 00:08:47.984 "uuid": "4792e161-863a-4aa1-a7db-98b16f6151ab", 00:08:47.984 "is_configured": true, 00:08:47.984 "data_offset": 2048, 00:08:47.984 "data_size": 63488 00:08:47.984 }, 00:08:47.984 { 00:08:47.984 "name": "BaseBdev2", 00:08:47.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.984 "is_configured": false, 00:08:47.984 
"data_offset": 0, 00:08:47.984 "data_size": 0 00:08:47.984 } 00:08:47.984 ] 00:08:47.984 }' 00:08:47.984 20:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.984 20:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.552 20:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:48.552 20:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.552 20:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.552 [2024-11-26 20:21:41.895690] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:48.552 [2024-11-26 20:21:41.895775] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:08:48.552 20:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.552 20:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:48.552 20:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.552 20:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.552 [2024-11-26 20:21:41.903740] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:48.552 [2024-11-26 20:21:41.905983] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:48.552 [2024-11-26 20:21:41.906034] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:48.552 20:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.552 20:21:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:48.552 20:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:48.552 20:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:48.552 20:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.552 20:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:48.552 20:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:48.552 20:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:48.552 20:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:48.552 20:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.552 20:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.552 20:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.552 20:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.552 20:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.552 20:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.552 20:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.553 20:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.553 20:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.553 20:21:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.553 "name": "Existed_Raid", 00:08:48.553 "uuid": "e564d1d1-2b6c-4c14-a91d-0ad3db0cc6a5", 00:08:48.553 "strip_size_kb": 0, 00:08:48.553 "state": "configuring", 00:08:48.553 "raid_level": "raid1", 00:08:48.553 "superblock": true, 00:08:48.553 "num_base_bdevs": 2, 00:08:48.553 "num_base_bdevs_discovered": 1, 00:08:48.553 "num_base_bdevs_operational": 2, 00:08:48.553 "base_bdevs_list": [ 00:08:48.553 { 00:08:48.553 "name": "BaseBdev1", 00:08:48.553 "uuid": "4792e161-863a-4aa1-a7db-98b16f6151ab", 00:08:48.553 "is_configured": true, 00:08:48.553 "data_offset": 2048, 00:08:48.553 "data_size": 63488 00:08:48.553 }, 00:08:48.553 { 00:08:48.553 "name": "BaseBdev2", 00:08:48.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.553 "is_configured": false, 00:08:48.553 "data_offset": 0, 00:08:48.553 "data_size": 0 00:08:48.553 } 00:08:48.553 ] 00:08:48.553 }' 00:08:48.553 20:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.553 20:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.120 20:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:49.120 20:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.120 20:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.120 [2024-11-26 20:21:42.420595] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:49.120 [2024-11-26 20:21:42.420896] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:49.120 [2024-11-26 20:21:42.420917] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:49.120 [2024-11-26 20:21:42.421317] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:49.120 
[2024-11-26 20:21:42.421517] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:49.120 [2024-11-26 20:21:42.421551] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:08:49.120 BaseBdev2 00:08:49.120 [2024-11-26 20:21:42.421766] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:49.120 20:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.120 20:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:49.120 20:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:49.120 20:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:49.120 20:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:49.120 20:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:49.121 20:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:49.121 20:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:49.121 20:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.121 20:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.121 20:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.121 20:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:49.121 20:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.121 20:21:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:49.121 [ 00:08:49.121 { 00:08:49.121 "name": "BaseBdev2", 00:08:49.121 "aliases": [ 00:08:49.121 "784beede-b0d8-4ae7-a05c-29a72d498d37" 00:08:49.121 ], 00:08:49.121 "product_name": "Malloc disk", 00:08:49.121 "block_size": 512, 00:08:49.121 "num_blocks": 65536, 00:08:49.121 "uuid": "784beede-b0d8-4ae7-a05c-29a72d498d37", 00:08:49.121 "assigned_rate_limits": { 00:08:49.121 "rw_ios_per_sec": 0, 00:08:49.121 "rw_mbytes_per_sec": 0, 00:08:49.121 "r_mbytes_per_sec": 0, 00:08:49.121 "w_mbytes_per_sec": 0 00:08:49.121 }, 00:08:49.121 "claimed": true, 00:08:49.121 "claim_type": "exclusive_write", 00:08:49.121 "zoned": false, 00:08:49.121 "supported_io_types": { 00:08:49.121 "read": true, 00:08:49.121 "write": true, 00:08:49.121 "unmap": true, 00:08:49.121 "flush": true, 00:08:49.121 "reset": true, 00:08:49.121 "nvme_admin": false, 00:08:49.121 "nvme_io": false, 00:08:49.121 "nvme_io_md": false, 00:08:49.121 "write_zeroes": true, 00:08:49.121 "zcopy": true, 00:08:49.121 "get_zone_info": false, 00:08:49.121 "zone_management": false, 00:08:49.121 "zone_append": false, 00:08:49.121 "compare": false, 00:08:49.121 "compare_and_write": false, 00:08:49.121 "abort": true, 00:08:49.121 "seek_hole": false, 00:08:49.121 "seek_data": false, 00:08:49.121 "copy": true, 00:08:49.121 "nvme_iov_md": false 00:08:49.121 }, 00:08:49.121 "memory_domains": [ 00:08:49.121 { 00:08:49.121 "dma_device_id": "system", 00:08:49.121 "dma_device_type": 1 00:08:49.121 }, 00:08:49.121 { 00:08:49.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.121 "dma_device_type": 2 00:08:49.121 } 00:08:49.121 ], 00:08:49.121 "driver_specific": {} 00:08:49.121 } 00:08:49.121 ] 00:08:49.121 20:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.121 20:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:49.121 20:21:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:49.121 20:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:49.121 20:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:49.121 20:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.121 20:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:49.121 20:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:49.121 20:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:49.121 20:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:49.121 20:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.121 20:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.121 20:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.121 20:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.121 20:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.121 20:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.121 20:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.121 20:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.121 20:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.121 20:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:08:49.121 "name": "Existed_Raid", 00:08:49.121 "uuid": "e564d1d1-2b6c-4c14-a91d-0ad3db0cc6a5", 00:08:49.121 "strip_size_kb": 0, 00:08:49.121 "state": "online", 00:08:49.121 "raid_level": "raid1", 00:08:49.121 "superblock": true, 00:08:49.121 "num_base_bdevs": 2, 00:08:49.121 "num_base_bdevs_discovered": 2, 00:08:49.121 "num_base_bdevs_operational": 2, 00:08:49.121 "base_bdevs_list": [ 00:08:49.121 { 00:08:49.121 "name": "BaseBdev1", 00:08:49.121 "uuid": "4792e161-863a-4aa1-a7db-98b16f6151ab", 00:08:49.121 "is_configured": true, 00:08:49.121 "data_offset": 2048, 00:08:49.121 "data_size": 63488 00:08:49.121 }, 00:08:49.121 { 00:08:49.121 "name": "BaseBdev2", 00:08:49.121 "uuid": "784beede-b0d8-4ae7-a05c-29a72d498d37", 00:08:49.121 "is_configured": true, 00:08:49.121 "data_offset": 2048, 00:08:49.121 "data_size": 63488 00:08:49.121 } 00:08:49.121 ] 00:08:49.121 }' 00:08:49.121 20:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.121 20:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.380 20:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:49.380 20:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:49.380 20:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:49.381 20:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:49.381 20:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:49.381 20:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:49.381 20:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:49.381 20:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- 
# rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:49.381 20:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.381 20:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.381 [2024-11-26 20:21:42.912314] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:49.640 20:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.640 20:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:49.640 "name": "Existed_Raid", 00:08:49.640 "aliases": [ 00:08:49.640 "e564d1d1-2b6c-4c14-a91d-0ad3db0cc6a5" 00:08:49.640 ], 00:08:49.640 "product_name": "Raid Volume", 00:08:49.640 "block_size": 512, 00:08:49.640 "num_blocks": 63488, 00:08:49.640 "uuid": "e564d1d1-2b6c-4c14-a91d-0ad3db0cc6a5", 00:08:49.640 "assigned_rate_limits": { 00:08:49.640 "rw_ios_per_sec": 0, 00:08:49.640 "rw_mbytes_per_sec": 0, 00:08:49.640 "r_mbytes_per_sec": 0, 00:08:49.640 "w_mbytes_per_sec": 0 00:08:49.640 }, 00:08:49.640 "claimed": false, 00:08:49.640 "zoned": false, 00:08:49.640 "supported_io_types": { 00:08:49.640 "read": true, 00:08:49.640 "write": true, 00:08:49.640 "unmap": false, 00:08:49.640 "flush": false, 00:08:49.640 "reset": true, 00:08:49.640 "nvme_admin": false, 00:08:49.640 "nvme_io": false, 00:08:49.640 "nvme_io_md": false, 00:08:49.640 "write_zeroes": true, 00:08:49.640 "zcopy": false, 00:08:49.640 "get_zone_info": false, 00:08:49.640 "zone_management": false, 00:08:49.640 "zone_append": false, 00:08:49.640 "compare": false, 00:08:49.640 "compare_and_write": false, 00:08:49.640 "abort": false, 00:08:49.640 "seek_hole": false, 00:08:49.640 "seek_data": false, 00:08:49.640 "copy": false, 00:08:49.640 "nvme_iov_md": false 00:08:49.640 }, 00:08:49.640 "memory_domains": [ 00:08:49.640 { 00:08:49.640 "dma_device_id": "system", 00:08:49.640 "dma_device_type": 1 00:08:49.640 }, 
00:08:49.640 { 00:08:49.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.640 "dma_device_type": 2 00:08:49.640 }, 00:08:49.640 { 00:08:49.640 "dma_device_id": "system", 00:08:49.640 "dma_device_type": 1 00:08:49.640 }, 00:08:49.640 { 00:08:49.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.641 "dma_device_type": 2 00:08:49.641 } 00:08:49.641 ], 00:08:49.641 "driver_specific": { 00:08:49.641 "raid": { 00:08:49.641 "uuid": "e564d1d1-2b6c-4c14-a91d-0ad3db0cc6a5", 00:08:49.641 "strip_size_kb": 0, 00:08:49.641 "state": "online", 00:08:49.641 "raid_level": "raid1", 00:08:49.641 "superblock": true, 00:08:49.641 "num_base_bdevs": 2, 00:08:49.641 "num_base_bdevs_discovered": 2, 00:08:49.641 "num_base_bdevs_operational": 2, 00:08:49.641 "base_bdevs_list": [ 00:08:49.641 { 00:08:49.641 "name": "BaseBdev1", 00:08:49.641 "uuid": "4792e161-863a-4aa1-a7db-98b16f6151ab", 00:08:49.641 "is_configured": true, 00:08:49.641 "data_offset": 2048, 00:08:49.641 "data_size": 63488 00:08:49.641 }, 00:08:49.641 { 00:08:49.641 "name": "BaseBdev2", 00:08:49.641 "uuid": "784beede-b0d8-4ae7-a05c-29a72d498d37", 00:08:49.641 "is_configured": true, 00:08:49.641 "data_offset": 2048, 00:08:49.641 "data_size": 63488 00:08:49.641 } 00:08:49.641 ] 00:08:49.641 } 00:08:49.641 } 00:08:49.641 }' 00:08:49.641 20:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:49.641 20:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:49.641 BaseBdev2' 00:08:49.641 20:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.641 20:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:49.641 20:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:08:49.641 20:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:49.641 20:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.641 20:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.641 20:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.641 20:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.641 20:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:49.641 20:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:49.641 20:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:49.641 20:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:49.641 20:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.641 20:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.641 20:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.641 20:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.641 20:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:49.641 20:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:49.641 20:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:49.641 20:21:43 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.641 20:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.641 [2024-11-26 20:21:43.115692] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:49.641 20:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.641 20:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:49.641 20:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:49.641 20:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:49.641 20:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:08:49.641 20:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:49.641 20:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:49.641 20:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.641 20:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:49.641 20:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:49.641 20:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:49.641 20:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:49.641 20:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.641 20:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.641 20:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.641 
20:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.641 20:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.641 20:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.641 20:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.641 20:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.641 20:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.900 20:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.900 "name": "Existed_Raid", 00:08:49.900 "uuid": "e564d1d1-2b6c-4c14-a91d-0ad3db0cc6a5", 00:08:49.900 "strip_size_kb": 0, 00:08:49.900 "state": "online", 00:08:49.900 "raid_level": "raid1", 00:08:49.900 "superblock": true, 00:08:49.900 "num_base_bdevs": 2, 00:08:49.900 "num_base_bdevs_discovered": 1, 00:08:49.900 "num_base_bdevs_operational": 1, 00:08:49.900 "base_bdevs_list": [ 00:08:49.900 { 00:08:49.900 "name": null, 00:08:49.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.900 "is_configured": false, 00:08:49.900 "data_offset": 0, 00:08:49.900 "data_size": 63488 00:08:49.900 }, 00:08:49.900 { 00:08:49.900 "name": "BaseBdev2", 00:08:49.900 "uuid": "784beede-b0d8-4ae7-a05c-29a72d498d37", 00:08:49.900 "is_configured": true, 00:08:49.900 "data_offset": 2048, 00:08:49.900 "data_size": 63488 00:08:49.900 } 00:08:49.900 ] 00:08:49.900 }' 00:08:49.900 20:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.900 20:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.167 20:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:50.167 20:21:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:50.167 20:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:50.167 20:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.167 20:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.167 20:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.167 20:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.167 20:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:50.167 20:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:50.167 20:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:50.167 20:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.167 20:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.167 [2024-11-26 20:21:43.641218] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:50.167 [2024-11-26 20:21:43.641362] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:50.167 [2024-11-26 20:21:43.663384] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:50.167 [2024-11-26 20:21:43.663442] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:50.167 [2024-11-26 20:21:43.663454] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:50.167 20:21:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.167 20:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:50.167 20:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:50.167 20:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:50.167 20:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.167 20:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.167 20:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.167 20:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.453 20:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:50.453 20:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:50.453 20:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:50.453 20:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74606 00:08:50.453 20:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 74606 ']' 00:08:50.453 20:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 74606 00:08:50.453 20:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:08:50.453 20:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:50.453 20:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74606 00:08:50.453 20:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:50.453 20:21:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:50.453 killing process with pid 74606 00:08:50.453 20:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74606' 00:08:50.453 20:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 74606 00:08:50.453 20:21:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 74606 00:08:50.453 [2024-11-26 20:21:43.762303] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:50.453 [2024-11-26 20:21:43.763968] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:50.713 20:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:50.713 00:08:50.713 real 0m4.290s 00:08:50.713 user 0m6.596s 00:08:50.713 sys 0m0.937s 00:08:50.713 20:21:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:50.713 20:21:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.713 ************************************ 00:08:50.713 END TEST raid_state_function_test_sb 00:08:50.713 ************************************ 00:08:50.713 20:21:44 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:08:50.713 20:21:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:50.713 20:21:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:50.713 20:21:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:50.713 ************************************ 00:08:50.713 START TEST raid_superblock_test 00:08:50.713 ************************************ 00:08:50.713 20:21:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:08:50.713 20:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local 
raid_level=raid1 00:08:50.714 20:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:50.714 20:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:50.714 20:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:50.714 20:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:50.714 20:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:50.714 20:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:50.714 20:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:50.714 20:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:50.714 20:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:50.714 20:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:50.714 20:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:50.714 20:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:50.714 20:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:08:50.714 20:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:08:50.714 20:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74847 00:08:50.714 20:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:50.714 20:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74847 00:08:50.714 20:21:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 74847 ']' 00:08:50.714 20:21:44 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.714 20:21:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:50.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.714 20:21:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:50.714 20:21:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:50.714 20:21:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.972 [2024-11-26 20:21:44.306812] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:50.972 [2024-11-26 20:21:44.307027] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74847 ] 00:08:50.972 [2024-11-26 20:21:44.482540] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.231 [2024-11-26 20:21:44.566103] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.231 [2024-11-26 20:21:44.640833] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:51.231 [2024-11-26 20:21:44.640880] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:51.799 20:21:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:51.799 20:21:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:51.799 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:51.799 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:51.799 20:21:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:51.799 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:51.799 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:51.799 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:51.799 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:51.799 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:51.799 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:51.799 20:21:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.799 20:21:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.799 malloc1 00:08:51.799 20:21:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.799 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:51.799 20:21:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.799 20:21:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.799 [2024-11-26 20:21:45.217791] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:51.799 [2024-11-26 20:21:45.217885] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:51.799 [2024-11-26 20:21:45.217923] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:51.799 [2024-11-26 20:21:45.217945] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:51.799 
[2024-11-26 20:21:45.220778] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:51.800 [2024-11-26 20:21:45.220837] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:51.800 pt1 00:08:51.800 20:21:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.800 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:51.800 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:51.800 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:51.800 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:51.800 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:51.800 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:51.800 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:51.800 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:51.800 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:51.800 20:21:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.800 20:21:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.800 malloc2 00:08:51.800 20:21:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.800 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:51.800 20:21:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.800 20:21:45 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.800 [2024-11-26 20:21:45.262754] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:51.800 [2024-11-26 20:21:45.262833] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:51.800 [2024-11-26 20:21:45.262856] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:51.800 [2024-11-26 20:21:45.262882] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:51.800 [2024-11-26 20:21:45.265299] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:51.800 [2024-11-26 20:21:45.265344] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:51.800 pt2 00:08:51.800 20:21:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.800 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:51.800 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:51.800 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:51.800 20:21:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.800 20:21:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.800 [2024-11-26 20:21:45.274772] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:51.800 [2024-11-26 20:21:45.276935] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:51.800 [2024-11-26 20:21:45.277100] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:08:51.800 [2024-11-26 20:21:45.277117] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:51.800 [2024-11-26 
20:21:45.277426] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:51.800 [2024-11-26 20:21:45.277632] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:08:51.800 [2024-11-26 20:21:45.277648] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:08:51.800 [2024-11-26 20:21:45.277811] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:51.800 20:21:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.800 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:51.800 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:51.800 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:51.800 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:51.800 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:51.800 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:51.800 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.800 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.800 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.800 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.800 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.800 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:51.800 20:21:45 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.800 20:21:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.800 20:21:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.800 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.800 "name": "raid_bdev1", 00:08:51.800 "uuid": "2b8a9997-e3a4-46fc-baa3-087f9263e43e", 00:08:51.800 "strip_size_kb": 0, 00:08:51.800 "state": "online", 00:08:51.800 "raid_level": "raid1", 00:08:51.800 "superblock": true, 00:08:51.800 "num_base_bdevs": 2, 00:08:51.800 "num_base_bdevs_discovered": 2, 00:08:51.800 "num_base_bdevs_operational": 2, 00:08:51.800 "base_bdevs_list": [ 00:08:51.800 { 00:08:51.800 "name": "pt1", 00:08:51.800 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:51.800 "is_configured": true, 00:08:51.800 "data_offset": 2048, 00:08:51.800 "data_size": 63488 00:08:51.800 }, 00:08:51.800 { 00:08:51.800 "name": "pt2", 00:08:51.800 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:51.800 "is_configured": true, 00:08:51.800 "data_offset": 2048, 00:08:51.800 "data_size": 63488 00:08:51.800 } 00:08:51.800 ] 00:08:51.800 }' 00:08:51.800 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.800 20:21:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.368 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:52.368 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:52.368 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:52.368 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:52.368 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:52.368 
20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:52.368 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:52.368 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:52.368 20:21:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.368 20:21:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.368 [2024-11-26 20:21:45.726273] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:52.368 20:21:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.368 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:52.368 "name": "raid_bdev1", 00:08:52.368 "aliases": [ 00:08:52.368 "2b8a9997-e3a4-46fc-baa3-087f9263e43e" 00:08:52.368 ], 00:08:52.368 "product_name": "Raid Volume", 00:08:52.368 "block_size": 512, 00:08:52.368 "num_blocks": 63488, 00:08:52.368 "uuid": "2b8a9997-e3a4-46fc-baa3-087f9263e43e", 00:08:52.368 "assigned_rate_limits": { 00:08:52.368 "rw_ios_per_sec": 0, 00:08:52.368 "rw_mbytes_per_sec": 0, 00:08:52.368 "r_mbytes_per_sec": 0, 00:08:52.368 "w_mbytes_per_sec": 0 00:08:52.368 }, 00:08:52.368 "claimed": false, 00:08:52.368 "zoned": false, 00:08:52.368 "supported_io_types": { 00:08:52.368 "read": true, 00:08:52.368 "write": true, 00:08:52.368 "unmap": false, 00:08:52.368 "flush": false, 00:08:52.368 "reset": true, 00:08:52.368 "nvme_admin": false, 00:08:52.368 "nvme_io": false, 00:08:52.368 "nvme_io_md": false, 00:08:52.368 "write_zeroes": true, 00:08:52.368 "zcopy": false, 00:08:52.368 "get_zone_info": false, 00:08:52.368 "zone_management": false, 00:08:52.368 "zone_append": false, 00:08:52.368 "compare": false, 00:08:52.368 "compare_and_write": false, 00:08:52.368 "abort": false, 00:08:52.368 "seek_hole": false, 
00:08:52.368 "seek_data": false, 00:08:52.368 "copy": false, 00:08:52.368 "nvme_iov_md": false 00:08:52.368 }, 00:08:52.368 "memory_domains": [ 00:08:52.368 { 00:08:52.368 "dma_device_id": "system", 00:08:52.368 "dma_device_type": 1 00:08:52.368 }, 00:08:52.368 { 00:08:52.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.368 "dma_device_type": 2 00:08:52.368 }, 00:08:52.368 { 00:08:52.368 "dma_device_id": "system", 00:08:52.368 "dma_device_type": 1 00:08:52.368 }, 00:08:52.368 { 00:08:52.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.368 "dma_device_type": 2 00:08:52.368 } 00:08:52.368 ], 00:08:52.368 "driver_specific": { 00:08:52.368 "raid": { 00:08:52.368 "uuid": "2b8a9997-e3a4-46fc-baa3-087f9263e43e", 00:08:52.368 "strip_size_kb": 0, 00:08:52.368 "state": "online", 00:08:52.368 "raid_level": "raid1", 00:08:52.368 "superblock": true, 00:08:52.368 "num_base_bdevs": 2, 00:08:52.368 "num_base_bdevs_discovered": 2, 00:08:52.368 "num_base_bdevs_operational": 2, 00:08:52.368 "base_bdevs_list": [ 00:08:52.368 { 00:08:52.368 "name": "pt1", 00:08:52.368 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:52.368 "is_configured": true, 00:08:52.368 "data_offset": 2048, 00:08:52.368 "data_size": 63488 00:08:52.368 }, 00:08:52.368 { 00:08:52.368 "name": "pt2", 00:08:52.369 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:52.369 "is_configured": true, 00:08:52.369 "data_offset": 2048, 00:08:52.369 "data_size": 63488 00:08:52.369 } 00:08:52.369 ] 00:08:52.369 } 00:08:52.369 } 00:08:52.369 }' 00:08:52.369 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:52.369 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:52.369 pt2' 00:08:52.369 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.369 20:21:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:52.369 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:52.369 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:52.369 20:21:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.369 20:21:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.369 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.369 20:21:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.369 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:52.369 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:52.369 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:52.369 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:52.369 20:21:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.369 20:21:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.369 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.369 20:21:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.629 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:52.629 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:52.629 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:08:52.629 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:52.629 20:21:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.629 20:21:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.629 [2024-11-26 20:21:45.957863] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:52.629 20:21:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.629 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2b8a9997-e3a4-46fc-baa3-087f9263e43e 00:08:52.629 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 2b8a9997-e3a4-46fc-baa3-087f9263e43e ']' 00:08:52.629 20:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:52.629 20:21:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.629 20:21:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.629 [2024-11-26 20:21:45.997492] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:52.629 [2024-11-26 20:21:45.997527] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:52.629 [2024-11-26 20:21:45.997611] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:52.629 [2024-11-26 20:21:45.997699] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:52.629 [2024-11-26 20:21:45.997716] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:08:52.629 20:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.629 20:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:52.629 20:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.629 20:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.629 20:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:52.629 20:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.629 20:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:52.629 20:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:52.629 20:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:52.629 20:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:52.629 20:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.629 20:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.629 20:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.629 20:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:52.629 20:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:52.629 20:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.629 20:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.629 20:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.629 20:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:52.629 20:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:52.629 20:21:46 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.629 20:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.629 20:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.629 20:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:52.629 20:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:52.629 20:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:52.629 20:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:52.629 20:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:52.629 20:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:52.629 20:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:52.629 20:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:52.629 20:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:52.629 20:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.629 20:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.629 [2024-11-26 20:21:46.137335] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:52.629 [2024-11-26 20:21:46.139440] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:52.629 [2024-11-26 20:21:46.139537] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock 
of a different raid bdev found on bdev malloc1 00:08:52.629 [2024-11-26 20:21:46.139592] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:52.629 [2024-11-26 20:21:46.139610] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:52.629 [2024-11-26 20:21:46.139632] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:08:52.629 request: 00:08:52.629 { 00:08:52.629 "name": "raid_bdev1", 00:08:52.629 "raid_level": "raid1", 00:08:52.629 "base_bdevs": [ 00:08:52.629 "malloc1", 00:08:52.629 "malloc2" 00:08:52.629 ], 00:08:52.629 "superblock": false, 00:08:52.629 "method": "bdev_raid_create", 00:08:52.629 "req_id": 1 00:08:52.629 } 00:08:52.629 Got JSON-RPC error response 00:08:52.629 response: 00:08:52.629 { 00:08:52.629 "code": -17, 00:08:52.629 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:52.629 } 00:08:52.629 20:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:52.629 20:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:52.629 20:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:52.629 20:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:52.629 20:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:52.629 20:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.629 20:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.629 20:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:52.629 20:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.629 20:21:46 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.890 20:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:52.890 20:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:52.890 20:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:52.890 20:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.890 20:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.890 [2024-11-26 20:21:46.189194] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:52.890 [2024-11-26 20:21:46.189269] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:52.890 [2024-11-26 20:21:46.189290] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:52.890 [2024-11-26 20:21:46.189301] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:52.890 [2024-11-26 20:21:46.191709] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:52.890 [2024-11-26 20:21:46.191744] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:52.890 [2024-11-26 20:21:46.191836] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:52.890 [2024-11-26 20:21:46.191880] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:52.890 pt1 00:08:52.890 20:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.890 20:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:52.890 20:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:52.890 20:21:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:52.890 20:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:52.890 20:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:52.890 20:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:52.890 20:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.890 20:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.890 20:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.890 20:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.890 20:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.890 20:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.890 20:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:52.890 20:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.890 20:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.890 20:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.890 "name": "raid_bdev1", 00:08:52.890 "uuid": "2b8a9997-e3a4-46fc-baa3-087f9263e43e", 00:08:52.890 "strip_size_kb": 0, 00:08:52.890 "state": "configuring", 00:08:52.890 "raid_level": "raid1", 00:08:52.890 "superblock": true, 00:08:52.890 "num_base_bdevs": 2, 00:08:52.890 "num_base_bdevs_discovered": 1, 00:08:52.890 "num_base_bdevs_operational": 2, 00:08:52.890 "base_bdevs_list": [ 00:08:52.890 { 00:08:52.890 "name": "pt1", 00:08:52.890 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:52.890 
"is_configured": true, 00:08:52.890 "data_offset": 2048, 00:08:52.890 "data_size": 63488 00:08:52.890 }, 00:08:52.890 { 00:08:52.890 "name": null, 00:08:52.890 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:52.890 "is_configured": false, 00:08:52.890 "data_offset": 2048, 00:08:52.890 "data_size": 63488 00:08:52.890 } 00:08:52.890 ] 00:08:52.890 }' 00:08:52.890 20:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.890 20:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.150 20:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:53.150 20:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:53.150 20:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:53.150 20:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:53.150 20:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.150 20:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.150 [2024-11-26 20:21:46.660423] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:53.150 [2024-11-26 20:21:46.660510] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:53.150 [2024-11-26 20:21:46.660538] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:53.150 [2024-11-26 20:21:46.660549] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:53.150 [2024-11-26 20:21:46.661051] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:53.150 [2024-11-26 20:21:46.661086] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:53.150 [2024-11-26 20:21:46.661177] 
bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:53.150 [2024-11-26 20:21:46.661207] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:53.150 [2024-11-26 20:21:46.661314] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:53.150 [2024-11-26 20:21:46.661327] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:53.150 [2024-11-26 20:21:46.661595] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:53.150 [2024-11-26 20:21:46.661756] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:53.150 [2024-11-26 20:21:46.661788] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:08:53.150 [2024-11-26 20:21:46.661909] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:53.150 pt2 00:08:53.150 20:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.150 20:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:53.150 20:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:53.150 20:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:53.150 20:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:53.150 20:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:53.150 20:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:53.150 20:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:53.150 20:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:53.150 
20:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.150 20:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.150 20:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.150 20:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.150 20:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.150 20:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.150 20:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.150 20:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:53.150 20:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.422 20:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.422 "name": "raid_bdev1", 00:08:53.422 "uuid": "2b8a9997-e3a4-46fc-baa3-087f9263e43e", 00:08:53.422 "strip_size_kb": 0, 00:08:53.422 "state": "online", 00:08:53.422 "raid_level": "raid1", 00:08:53.422 "superblock": true, 00:08:53.422 "num_base_bdevs": 2, 00:08:53.422 "num_base_bdevs_discovered": 2, 00:08:53.422 "num_base_bdevs_operational": 2, 00:08:53.422 "base_bdevs_list": [ 00:08:53.422 { 00:08:53.422 "name": "pt1", 00:08:53.422 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:53.422 "is_configured": true, 00:08:53.422 "data_offset": 2048, 00:08:53.422 "data_size": 63488 00:08:53.422 }, 00:08:53.422 { 00:08:53.422 "name": "pt2", 00:08:53.422 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:53.422 "is_configured": true, 00:08:53.422 "data_offset": 2048, 00:08:53.422 "data_size": 63488 00:08:53.422 } 00:08:53.422 ] 00:08:53.422 }' 00:08:53.422 20:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:08:53.422 20:21:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.709 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:53.709 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:53.709 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:53.709 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:53.709 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:53.709 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:53.709 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:53.709 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:53.709 20:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.709 20:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.709 [2024-11-26 20:21:47.140006] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:53.709 20:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.709 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:53.709 "name": "raid_bdev1", 00:08:53.709 "aliases": [ 00:08:53.709 "2b8a9997-e3a4-46fc-baa3-087f9263e43e" 00:08:53.709 ], 00:08:53.709 "product_name": "Raid Volume", 00:08:53.709 "block_size": 512, 00:08:53.709 "num_blocks": 63488, 00:08:53.709 "uuid": "2b8a9997-e3a4-46fc-baa3-087f9263e43e", 00:08:53.709 "assigned_rate_limits": { 00:08:53.709 "rw_ios_per_sec": 0, 00:08:53.709 "rw_mbytes_per_sec": 0, 00:08:53.709 "r_mbytes_per_sec": 0, 00:08:53.709 "w_mbytes_per_sec": 0 
00:08:53.709 }, 00:08:53.709 "claimed": false, 00:08:53.709 "zoned": false, 00:08:53.709 "supported_io_types": { 00:08:53.709 "read": true, 00:08:53.709 "write": true, 00:08:53.709 "unmap": false, 00:08:53.709 "flush": false, 00:08:53.709 "reset": true, 00:08:53.709 "nvme_admin": false, 00:08:53.709 "nvme_io": false, 00:08:53.709 "nvme_io_md": false, 00:08:53.709 "write_zeroes": true, 00:08:53.709 "zcopy": false, 00:08:53.709 "get_zone_info": false, 00:08:53.709 "zone_management": false, 00:08:53.709 "zone_append": false, 00:08:53.709 "compare": false, 00:08:53.709 "compare_and_write": false, 00:08:53.710 "abort": false, 00:08:53.710 "seek_hole": false, 00:08:53.710 "seek_data": false, 00:08:53.710 "copy": false, 00:08:53.710 "nvme_iov_md": false 00:08:53.710 }, 00:08:53.710 "memory_domains": [ 00:08:53.710 { 00:08:53.710 "dma_device_id": "system", 00:08:53.710 "dma_device_type": 1 00:08:53.710 }, 00:08:53.710 { 00:08:53.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.710 "dma_device_type": 2 00:08:53.710 }, 00:08:53.710 { 00:08:53.710 "dma_device_id": "system", 00:08:53.710 "dma_device_type": 1 00:08:53.710 }, 00:08:53.710 { 00:08:53.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.710 "dma_device_type": 2 00:08:53.710 } 00:08:53.710 ], 00:08:53.710 "driver_specific": { 00:08:53.710 "raid": { 00:08:53.710 "uuid": "2b8a9997-e3a4-46fc-baa3-087f9263e43e", 00:08:53.710 "strip_size_kb": 0, 00:08:53.710 "state": "online", 00:08:53.710 "raid_level": "raid1", 00:08:53.710 "superblock": true, 00:08:53.710 "num_base_bdevs": 2, 00:08:53.710 "num_base_bdevs_discovered": 2, 00:08:53.710 "num_base_bdevs_operational": 2, 00:08:53.710 "base_bdevs_list": [ 00:08:53.710 { 00:08:53.710 "name": "pt1", 00:08:53.710 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:53.710 "is_configured": true, 00:08:53.710 "data_offset": 2048, 00:08:53.710 "data_size": 63488 00:08:53.710 }, 00:08:53.710 { 00:08:53.710 "name": "pt2", 00:08:53.710 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:08:53.710 "is_configured": true, 00:08:53.710 "data_offset": 2048, 00:08:53.710 "data_size": 63488 00:08:53.710 } 00:08:53.710 ] 00:08:53.710 } 00:08:53.710 } 00:08:53.710 }' 00:08:53.710 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:53.710 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:53.710 pt2' 00:08:53.710 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:53.710 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:53.710 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:53.710 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:53.710 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:53.710 20:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.710 20:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.968 20:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.968 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:53.968 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:53.968 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:53.968 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:53.968 20:21:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:53.968 20:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.968 20:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.968 20:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.968 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:53.968 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:53.968 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:53.968 20:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.968 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:53.968 20:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.969 [2024-11-26 20:21:47.339676] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:53.969 20:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.969 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 2b8a9997-e3a4-46fc-baa3-087f9263e43e '!=' 2b8a9997-e3a4-46fc-baa3-087f9263e43e ']' 00:08:53.969 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:08:53.969 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:53.969 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:53.969 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:08:53.969 20:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.969 20:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:53.969 [2024-11-26 20:21:47.387287] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:53.969 20:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.969 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:53.969 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:53.969 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:53.969 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:53.969 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:53.969 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:53.969 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.969 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.969 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.969 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.969 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.969 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:53.969 20:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.969 20:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.969 20:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.969 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.969 "name": "raid_bdev1", 
00:08:53.969 "uuid": "2b8a9997-e3a4-46fc-baa3-087f9263e43e", 00:08:53.969 "strip_size_kb": 0, 00:08:53.969 "state": "online", 00:08:53.969 "raid_level": "raid1", 00:08:53.969 "superblock": true, 00:08:53.969 "num_base_bdevs": 2, 00:08:53.969 "num_base_bdevs_discovered": 1, 00:08:53.969 "num_base_bdevs_operational": 1, 00:08:53.969 "base_bdevs_list": [ 00:08:53.969 { 00:08:53.969 "name": null, 00:08:53.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.969 "is_configured": false, 00:08:53.969 "data_offset": 0, 00:08:53.969 "data_size": 63488 00:08:53.969 }, 00:08:53.969 { 00:08:53.969 "name": "pt2", 00:08:53.969 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:53.969 "is_configured": true, 00:08:53.969 "data_offset": 2048, 00:08:53.969 "data_size": 63488 00:08:53.969 } 00:08:53.969 ] 00:08:53.969 }' 00:08:53.969 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.969 20:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.537 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:54.537 20:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.537 20:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.537 [2024-11-26 20:21:47.874413] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:54.537 [2024-11-26 20:21:47.874529] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:54.537 [2024-11-26 20:21:47.874672] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:54.537 [2024-11-26 20:21:47.874760] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:54.537 [2024-11-26 20:21:47.874811] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name 
raid_bdev1, state offline 00:08:54.537 20:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.537 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.537 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:08:54.537 20:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.537 20:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.537 20:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.537 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:08:54.537 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:08:54.537 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:08:54.537 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:54.537 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:08:54.537 20:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.537 20:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.537 20:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.537 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:54.537 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:54.537 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:08:54.537 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:54.537 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:08:54.537 20:21:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:54.537 20:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.537 20:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.537 [2024-11-26 20:21:47.946292] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:54.537 [2024-11-26 20:21:47.946438] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:54.537 [2024-11-26 20:21:47.946480] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:54.537 [2024-11-26 20:21:47.946517] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:54.537 [2024-11-26 20:21:47.949013] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:54.537 [2024-11-26 20:21:47.949112] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:54.537 [2024-11-26 20:21:47.949256] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:54.537 [2024-11-26 20:21:47.949332] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:54.537 [2024-11-26 20:21:47.949455] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:54.537 [2024-11-26 20:21:47.949498] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:54.537 [2024-11-26 20:21:47.949800] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:54.537 [2024-11-26 20:21:47.949989] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:54.537 [2024-11-26 20:21:47.950041] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:08:54.537 
[2024-11-26 20:21:47.950249] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:54.537 pt2 00:08:54.537 20:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.537 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:54.537 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:54.537 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:54.537 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:54.537 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:54.537 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:54.537 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.537 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.537 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.537 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.537 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.537 20:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:54.537 20:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.537 20:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.537 20:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.537 20:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.537 "name": 
"raid_bdev1", 00:08:54.537 "uuid": "2b8a9997-e3a4-46fc-baa3-087f9263e43e", 00:08:54.537 "strip_size_kb": 0, 00:08:54.537 "state": "online", 00:08:54.537 "raid_level": "raid1", 00:08:54.537 "superblock": true, 00:08:54.537 "num_base_bdevs": 2, 00:08:54.537 "num_base_bdevs_discovered": 1, 00:08:54.537 "num_base_bdevs_operational": 1, 00:08:54.537 "base_bdevs_list": [ 00:08:54.537 { 00:08:54.537 "name": null, 00:08:54.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.537 "is_configured": false, 00:08:54.537 "data_offset": 2048, 00:08:54.537 "data_size": 63488 00:08:54.537 }, 00:08:54.537 { 00:08:54.537 "name": "pt2", 00:08:54.537 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:54.537 "is_configured": true, 00:08:54.537 "data_offset": 2048, 00:08:54.537 "data_size": 63488 00:08:54.537 } 00:08:54.537 ] 00:08:54.537 }' 00:08:54.537 20:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.537 20:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.104 20:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:55.104 20:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.104 20:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.104 [2024-11-26 20:21:48.409638] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:55.104 [2024-11-26 20:21:48.409751] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:55.104 [2024-11-26 20:21:48.409876] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:55.104 [2024-11-26 20:21:48.409931] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:55.104 [2024-11-26 20:21:48.409944] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000006d00 name raid_bdev1, state offline 00:08:55.104 20:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.104 20:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.104 20:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.104 20:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.104 20:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:08:55.104 20:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.104 20:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:08:55.104 20:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:08:55.104 20:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:08:55.104 20:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:55.104 20:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.104 20:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.104 [2024-11-26 20:21:48.473491] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:55.104 [2024-11-26 20:21:48.473663] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:55.104 [2024-11-26 20:21:48.473714] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:08:55.104 [2024-11-26 20:21:48.473789] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:55.104 [2024-11-26 20:21:48.476410] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:55.104 [2024-11-26 20:21:48.476521] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:55.104 [2024-11-26 20:21:48.476662] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:55.104 [2024-11-26 20:21:48.476751] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:55.104 [2024-11-26 20:21:48.476921] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:55.104 [2024-11-26 20:21:48.476983] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:55.104 [2024-11-26 20:21:48.477019] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:08:55.104 [2024-11-26 20:21:48.477069] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:55.104 [2024-11-26 20:21:48.477156] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:08:55.104 [2024-11-26 20:21:48.477169] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:55.104 [2024-11-26 20:21:48.477451] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:55.104 [2024-11-26 20:21:48.477585] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:08:55.104 [2024-11-26 20:21:48.477597] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:08:55.104 [2024-11-26 20:21:48.477807] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:55.104 pt1 00:08:55.104 20:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.104 20:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:08:55.104 20:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online 
raid1 0 1 00:08:55.104 20:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:55.104 20:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:55.104 20:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:55.104 20:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:55.104 20:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:55.104 20:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.104 20:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.104 20:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.104 20:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.104 20:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:55.104 20:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.104 20:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.104 20:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.104 20:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.104 20:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.104 "name": "raid_bdev1", 00:08:55.104 "uuid": "2b8a9997-e3a4-46fc-baa3-087f9263e43e", 00:08:55.104 "strip_size_kb": 0, 00:08:55.104 "state": "online", 00:08:55.104 "raid_level": "raid1", 00:08:55.104 "superblock": true, 00:08:55.104 "num_base_bdevs": 2, 00:08:55.104 "num_base_bdevs_discovered": 1, 00:08:55.104 "num_base_bdevs_operational": 1, 00:08:55.104 
"base_bdevs_list": [ 00:08:55.104 { 00:08:55.104 "name": null, 00:08:55.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.104 "is_configured": false, 00:08:55.104 "data_offset": 2048, 00:08:55.104 "data_size": 63488 00:08:55.104 }, 00:08:55.104 { 00:08:55.104 "name": "pt2", 00:08:55.104 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:55.104 "is_configured": true, 00:08:55.104 "data_offset": 2048, 00:08:55.104 "data_size": 63488 00:08:55.104 } 00:08:55.105 ] 00:08:55.105 }' 00:08:55.105 20:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.105 20:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.670 20:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:55.670 20:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:55.670 20:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.670 20:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.670 20:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.670 20:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:55.670 20:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:55.670 20:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:55.670 20:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.670 20:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.670 [2024-11-26 20:21:48.977238] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:55.670 20:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:08:55.670 20:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 2b8a9997-e3a4-46fc-baa3-087f9263e43e '!=' 2b8a9997-e3a4-46fc-baa3-087f9263e43e ']' 00:08:55.670 20:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74847 00:08:55.670 20:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 74847 ']' 00:08:55.670 20:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 74847 00:08:55.670 20:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:55.670 20:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:55.670 20:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74847 00:08:55.670 killing process with pid 74847 00:08:55.670 20:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:55.670 20:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:55.670 20:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74847' 00:08:55.670 20:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 74847 00:08:55.670 [2024-11-26 20:21:49.063201] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:55.670 [2024-11-26 20:21:49.063304] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:55.670 20:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 74847 00:08:55.670 [2024-11-26 20:21:49.063362] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:55.670 [2024-11-26 20:21:49.063373] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:08:55.670 [2024-11-26 20:21:49.108391] 
bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:55.928 20:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:55.928 00:08:55.928 real 0m5.253s 00:08:55.928 user 0m8.425s 00:08:55.928 sys 0m1.154s 00:08:55.928 20:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:55.928 20:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.928 ************************************ 00:08:55.928 END TEST raid_superblock_test 00:08:55.928 ************************************ 00:08:56.186 20:21:49 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:08:56.186 20:21:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:56.186 20:21:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:56.186 20:21:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:56.186 ************************************ 00:08:56.186 START TEST raid_read_error_test 00:08:56.186 ************************************ 00:08:56.186 20:21:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 read 00:08:56.186 20:21:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:56.186 20:21:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:56.186 20:21:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:56.186 20:21:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:56.186 20:21:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:56.186 20:21:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:56.186 20:21:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:56.186 20:21:49 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:56.186 20:21:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:56.186 20:21:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:56.186 20:21:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:56.186 20:21:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:56.186 20:21:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:56.186 20:21:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:56.186 20:21:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:56.186 20:21:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:56.186 20:21:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:56.186 20:21:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:56.186 20:21:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:56.186 20:21:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:56.186 20:21:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:56.187 20:21:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ScDb9spFIT 00:08:56.187 20:21:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75172 00:08:56.187 20:21:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:56.187 20:21:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75172 00:08:56.187 20:21:49 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@831 -- # '[' -z 75172 ']' 00:08:56.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.187 20:21:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.187 20:21:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:56.187 20:21:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.187 20:21:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:56.187 20:21:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.187 [2024-11-26 20:21:49.643729] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:56.187 [2024-11-26 20:21:49.643872] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75172 ] 00:08:56.444 [2024-11-26 20:21:49.803302] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.444 [2024-11-26 20:21:49.884537] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.444 [2024-11-26 20:21:49.953712] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:56.444 [2024-11-26 20:21:49.953750] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:57.012 20:21:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:57.012 20:21:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:57.012 20:21:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:57.012 20:21:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:57.012 20:21:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.012 20:21:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.012 BaseBdev1_malloc 00:08:57.012 20:21:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.012 20:21:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:57.012 20:21:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.012 20:21:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.012 true 00:08:57.012 20:21:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.012 20:21:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:57.012 20:21:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.012 20:21:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.012 [2024-11-26 20:21:50.548100] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:57.012 [2024-11-26 20:21:50.548187] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:57.012 [2024-11-26 20:21:50.548224] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:57.012 [2024-11-26 20:21:50.548238] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:57.012 [2024-11-26 20:21:50.551714] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:57.012 [2024-11-26 20:21:50.551776] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 
00:08:57.012 BaseBdev1 00:08:57.012 20:21:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.012 20:21:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:57.012 20:21:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:57.012 20:21:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.012 20:21:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.273 BaseBdev2_malloc 00:08:57.273 20:21:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.273 20:21:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:57.273 20:21:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.273 20:21:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.273 true 00:08:57.273 20:21:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.273 20:21:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:57.273 20:21:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.273 20:21:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.273 [2024-11-26 20:21:50.600819] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:57.273 [2024-11-26 20:21:50.600890] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:57.273 [2024-11-26 20:21:50.600912] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:57.273 [2024-11-26 20:21:50.600922] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:08:57.273 [2024-11-26 20:21:50.603666] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:57.273 [2024-11-26 20:21:50.603701] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:57.273 BaseBdev2 00:08:57.273 20:21:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.273 20:21:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:57.273 20:21:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.273 20:21:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.273 [2024-11-26 20:21:50.612835] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:57.273 [2024-11-26 20:21:50.615090] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:57.273 [2024-11-26 20:21:50.615362] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:57.273 [2024-11-26 20:21:50.615389] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:57.273 [2024-11-26 20:21:50.615742] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:57.273 [2024-11-26 20:21:50.615925] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:57.273 [2024-11-26 20:21:50.615958] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:08:57.273 [2024-11-26 20:21:50.616160] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:57.273 20:21:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.273 20:21:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 2 00:08:57.273 20:21:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:57.273 20:21:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:57.273 20:21:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:57.273 20:21:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:57.273 20:21:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:57.273 20:21:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.273 20:21:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.273 20:21:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.273 20:21:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.273 20:21:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.273 20:21:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:57.273 20:21:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.273 20:21:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.273 20:21:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.273 20:21:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.273 "name": "raid_bdev1", 00:08:57.273 "uuid": "00ccb7e3-88b5-4640-a1c9-23decc7839b4", 00:08:57.273 "strip_size_kb": 0, 00:08:57.273 "state": "online", 00:08:57.273 "raid_level": "raid1", 00:08:57.273 "superblock": true, 00:08:57.273 "num_base_bdevs": 2, 00:08:57.273 "num_base_bdevs_discovered": 2, 00:08:57.273 "num_base_bdevs_operational": 
2, 00:08:57.273 "base_bdevs_list": [ 00:08:57.273 { 00:08:57.273 "name": "BaseBdev1", 00:08:57.273 "uuid": "b7e24e26-ca60-526d-a4fb-d9daf5b5df8c", 00:08:57.273 "is_configured": true, 00:08:57.273 "data_offset": 2048, 00:08:57.273 "data_size": 63488 00:08:57.273 }, 00:08:57.273 { 00:08:57.273 "name": "BaseBdev2", 00:08:57.273 "uuid": "139f95d1-84f0-5645-bcab-7b8db4450b3b", 00:08:57.273 "is_configured": true, 00:08:57.273 "data_offset": 2048, 00:08:57.273 "data_size": 63488 00:08:57.273 } 00:08:57.273 ] 00:08:57.273 }' 00:08:57.273 20:21:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.273 20:21:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.534 20:21:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:57.534 20:21:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:57.791 [2024-11-26 20:21:51.144492] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:58.725 20:21:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:58.725 20:21:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.725 20:21:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.725 20:21:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.725 20:21:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:58.725 20:21:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:58.725 20:21:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:58.725 20:21:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:58.725 
20:21:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:58.725 20:21:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:58.725 20:21:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:58.725 20:21:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:58.725 20:21:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:58.725 20:21:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:58.725 20:21:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.725 20:21:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.725 20:21:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.725 20:21:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.725 20:21:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.725 20:21:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:58.725 20:21:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.725 20:21:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.725 20:21:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.725 20:21:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.725 "name": "raid_bdev1", 00:08:58.725 "uuid": "00ccb7e3-88b5-4640-a1c9-23decc7839b4", 00:08:58.725 "strip_size_kb": 0, 00:08:58.725 "state": "online", 00:08:58.725 "raid_level": "raid1", 00:08:58.725 "superblock": true, 00:08:58.725 "num_base_bdevs": 
2, 00:08:58.725 "num_base_bdevs_discovered": 2, 00:08:58.725 "num_base_bdevs_operational": 2, 00:08:58.725 "base_bdevs_list": [ 00:08:58.725 { 00:08:58.725 "name": "BaseBdev1", 00:08:58.725 "uuid": "b7e24e26-ca60-526d-a4fb-d9daf5b5df8c", 00:08:58.725 "is_configured": true, 00:08:58.725 "data_offset": 2048, 00:08:58.725 "data_size": 63488 00:08:58.725 }, 00:08:58.725 { 00:08:58.725 "name": "BaseBdev2", 00:08:58.725 "uuid": "139f95d1-84f0-5645-bcab-7b8db4450b3b", 00:08:58.725 "is_configured": true, 00:08:58.725 "data_offset": 2048, 00:08:58.725 "data_size": 63488 00:08:58.725 } 00:08:58.725 ] 00:08:58.725 }' 00:08:58.725 20:21:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.725 20:21:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.293 20:21:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:59.293 20:21:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.293 20:21:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.293 [2024-11-26 20:21:52.568686] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:59.293 [2024-11-26 20:21:52.568782] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:59.293 [2024-11-26 20:21:52.571795] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:59.293 [2024-11-26 20:21:52.571886] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:59.293 [2024-11-26 20:21:52.572037] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:59.293 [2024-11-26 20:21:52.572053] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:08:59.293 { 00:08:59.293 "results": [ 00:08:59.293 { 00:08:59.293 "job": 
"raid_bdev1", 00:08:59.293 "core_mask": "0x1", 00:08:59.293 "workload": "randrw", 00:08:59.293 "percentage": 50, 00:08:59.293 "status": "finished", 00:08:59.293 "queue_depth": 1, 00:08:59.293 "io_size": 131072, 00:08:59.293 "runtime": 1.424932, 00:08:59.293 "iops": 12509.368868128444, 00:08:59.293 "mibps": 1563.6711085160555, 00:08:59.293 "io_failed": 0, 00:08:59.293 "io_timeout": 0, 00:08:59.293 "avg_latency_us": 76.92444951830325, 00:08:59.293 "min_latency_us": 23.923144104803495, 00:08:59.293 "max_latency_us": 1752.8733624454148 00:08:59.293 } 00:08:59.293 ], 00:08:59.293 "core_count": 1 00:08:59.293 } 00:08:59.293 20:21:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.293 20:21:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75172 00:08:59.293 20:21:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 75172 ']' 00:08:59.293 20:21:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 75172 00:08:59.293 20:21:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:59.293 20:21:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:59.293 20:21:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75172 00:08:59.293 20:21:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:59.293 killing process with pid 75172 00:08:59.293 20:21:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:59.293 20:21:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75172' 00:08:59.293 20:21:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 75172 00:08:59.293 [2024-11-26 20:21:52.621692] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:59.293 
20:21:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 75172 00:08:59.293 [2024-11-26 20:21:52.652983] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:59.552 20:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:59.552 20:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ScDb9spFIT 00:08:59.552 20:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:59.552 20:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:59.552 20:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:59.552 ************************************ 00:08:59.552 END TEST raid_read_error_test 00:08:59.552 ************************************ 00:08:59.552 20:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:59.552 20:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:59.552 20:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:59.552 00:08:59.552 real 0m3.490s 00:08:59.552 user 0m4.372s 00:08:59.552 sys 0m0.622s 00:08:59.552 20:21:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:59.552 20:21:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.552 20:21:53 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:59.552 20:21:53 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:59.552 20:21:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:59.552 20:21:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:59.552 ************************************ 00:08:59.552 START TEST raid_write_error_test 00:08:59.552 ************************************ 00:08:59.552 20:21:53 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 write 00:08:59.552 20:21:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:59.552 20:21:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:59.552 20:21:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:59.552 20:21:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:59.552 20:21:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:59.811 20:21:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:59.811 20:21:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:59.811 20:21:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:59.811 20:21:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:59.811 20:21:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:59.811 20:21:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:59.811 20:21:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:59.811 20:21:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:59.811 20:21:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:59.811 20:21:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:59.811 20:21:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:59.811 20:21:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:59.811 20:21:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:59.811 
20:21:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:59.811 20:21:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:59.811 20:21:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:59.811 20:21:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.147qpzN3QL 00:08:59.811 20:21:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75301 00:08:59.811 20:21:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:59.811 20:21:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75301 00:08:59.811 20:21:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 75301 ']' 00:08:59.811 20:21:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.811 20:21:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:59.811 20:21:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:59.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.811 20:21:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:59.811 20:21:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.811 [2024-11-26 20:21:53.198564] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:59.811 [2024-11-26 20:21:53.198802] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75301 ] 00:09:00.071 [2024-11-26 20:21:53.363559] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.071 [2024-11-26 20:21:53.448364] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.071 [2024-11-26 20:21:53.524506] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:00.071 [2024-11-26 20:21:53.524651] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:00.638 20:21:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:00.638 20:21:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:00.638 20:21:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:00.638 20:21:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:00.638 20:21:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.638 20:21:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.638 BaseBdev1_malloc 00:09:00.638 20:21:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.638 20:21:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:00.638 20:21:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.638 20:21:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.638 true 00:09:00.638 20:21:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:00.638 20:21:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:00.638 20:21:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.638 20:21:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.638 [2024-11-26 20:21:54.101336] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:00.638 [2024-11-26 20:21:54.101401] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:00.638 [2024-11-26 20:21:54.101437] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:00.639 [2024-11-26 20:21:54.101452] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:00.639 [2024-11-26 20:21:54.103922] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:00.639 [2024-11-26 20:21:54.104029] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:00.639 BaseBdev1 00:09:00.639 20:21:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.639 20:21:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:00.639 20:21:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:00.639 20:21:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.639 20:21:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.639 BaseBdev2_malloc 00:09:00.639 20:21:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.639 20:21:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:00.639 20:21:54 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.639 20:21:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.639 true 00:09:00.639 20:21:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.639 20:21:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:00.639 20:21:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.639 20:21:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.639 [2024-11-26 20:21:54.157127] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:00.639 [2024-11-26 20:21:54.157196] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:00.639 [2024-11-26 20:21:54.157226] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:00.639 [2024-11-26 20:21:54.157239] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:00.639 [2024-11-26 20:21:54.159554] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:00.639 [2024-11-26 20:21:54.159595] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:00.639 BaseBdev2 00:09:00.639 20:21:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.639 20:21:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:00.639 20:21:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.639 20:21:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.639 [2024-11-26 20:21:54.169139] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:09:00.639 [2024-11-26 20:21:54.171248] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:00.639 [2024-11-26 20:21:54.171450] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:00.639 [2024-11-26 20:21:54.171468] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:00.639 [2024-11-26 20:21:54.171762] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:09:00.639 [2024-11-26 20:21:54.171918] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:00.639 [2024-11-26 20:21:54.171932] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:09:00.639 [2024-11-26 20:21:54.172102] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:00.639 20:21:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.639 20:21:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:00.639 20:21:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:00.639 20:21:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:00.639 20:21:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:00.639 20:21:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:00.639 20:21:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:00.639 20:21:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.639 20:21:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.639 20:21:54 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.639 20:21:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.639 20:21:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.639 20:21:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:00.639 20:21:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.639 20:21:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.898 20:21:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.898 20:21:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.898 "name": "raid_bdev1", 00:09:00.898 "uuid": "1631d11e-98e5-4514-8695-75705401c4a3", 00:09:00.898 "strip_size_kb": 0, 00:09:00.898 "state": "online", 00:09:00.898 "raid_level": "raid1", 00:09:00.898 "superblock": true, 00:09:00.898 "num_base_bdevs": 2, 00:09:00.898 "num_base_bdevs_discovered": 2, 00:09:00.898 "num_base_bdevs_operational": 2, 00:09:00.898 "base_bdevs_list": [ 00:09:00.898 { 00:09:00.898 "name": "BaseBdev1", 00:09:00.898 "uuid": "2504262e-20f0-58b7-b73a-b710e118ba54", 00:09:00.898 "is_configured": true, 00:09:00.898 "data_offset": 2048, 00:09:00.898 "data_size": 63488 00:09:00.898 }, 00:09:00.898 { 00:09:00.898 "name": "BaseBdev2", 00:09:00.898 "uuid": "2b2ab6df-9932-5107-a0bf-253515fc1b5f", 00:09:00.898 "is_configured": true, 00:09:00.898 "data_offset": 2048, 00:09:00.898 "data_size": 63488 00:09:00.898 } 00:09:00.898 ] 00:09:00.898 }' 00:09:00.898 20:21:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.898 20:21:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.158 20:21:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:01.158 20:21:54 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:01.417 [2024-11-26 20:21:54.744661] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:02.353 20:21:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:02.353 20:21:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.353 20:21:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.353 [2024-11-26 20:21:55.650181] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:02.353 [2024-11-26 20:21:55.650379] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:02.353 [2024-11-26 20:21:55.650637] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005d40 00:09:02.353 20:21:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.353 20:21:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:02.353 20:21:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:02.353 20:21:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:09:02.353 20:21:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:09:02.353 20:21:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:02.353 20:21:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:02.353 20:21:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:02.353 20:21:55 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:02.353 20:21:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:02.353 20:21:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:02.353 20:21:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.353 20:21:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.353 20:21:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.353 20:21:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.353 20:21:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:02.353 20:21:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.353 20:21:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.353 20:21:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.353 20:21:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.353 20:21:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.353 "name": "raid_bdev1", 00:09:02.353 "uuid": "1631d11e-98e5-4514-8695-75705401c4a3", 00:09:02.353 "strip_size_kb": 0, 00:09:02.353 "state": "online", 00:09:02.353 "raid_level": "raid1", 00:09:02.353 "superblock": true, 00:09:02.353 "num_base_bdevs": 2, 00:09:02.353 "num_base_bdevs_discovered": 1, 00:09:02.353 "num_base_bdevs_operational": 1, 00:09:02.353 "base_bdevs_list": [ 00:09:02.353 { 00:09:02.353 "name": null, 00:09:02.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.353 "is_configured": false, 00:09:02.353 "data_offset": 0, 00:09:02.353 "data_size": 63488 00:09:02.353 }, 00:09:02.353 { 00:09:02.353 "name": 
"BaseBdev2", 00:09:02.353 "uuid": "2b2ab6df-9932-5107-a0bf-253515fc1b5f", 00:09:02.353 "is_configured": true, 00:09:02.353 "data_offset": 2048, 00:09:02.353 "data_size": 63488 00:09:02.353 } 00:09:02.353 ] 00:09:02.353 }' 00:09:02.353 20:21:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.353 20:21:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.612 20:21:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:02.612 20:21:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.612 20:21:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.612 [2024-11-26 20:21:56.080496] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:02.612 [2024-11-26 20:21:56.080648] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:02.612 [2024-11-26 20:21:56.083678] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:02.612 [2024-11-26 20:21:56.083791] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:02.612 [2024-11-26 20:21:56.083873] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:02.612 [2024-11-26 20:21:56.083924] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:09:02.612 { 00:09:02.612 "results": [ 00:09:02.612 { 00:09:02.612 "job": "raid_bdev1", 00:09:02.612 "core_mask": "0x1", 00:09:02.612 "workload": "randrw", 00:09:02.612 "percentage": 50, 00:09:02.612 "status": "finished", 00:09:02.612 "queue_depth": 1, 00:09:02.612 "io_size": 131072, 00:09:02.612 "runtime": 1.336592, 00:09:02.612 "iops": 14593.832672947317, 00:09:02.612 "mibps": 1824.2290841184147, 00:09:02.612 "io_failed": 0, 00:09:02.612 "io_timeout": 0, 
00:09:02.612 "avg_latency_us": 65.52233459014067, 00:09:02.612 "min_latency_us": 24.370305676855896, 00:09:02.612 "max_latency_us": 2160.6847161572055 00:09:02.612 } 00:09:02.612 ], 00:09:02.612 "core_count": 1 00:09:02.612 } 00:09:02.612 20:21:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.613 20:21:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75301 00:09:02.613 20:21:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 75301 ']' 00:09:02.613 20:21:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 75301 00:09:02.613 20:21:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:09:02.613 20:21:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:02.613 20:21:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75301 00:09:02.613 killing process with pid 75301 00:09:02.613 20:21:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:02.613 20:21:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:02.613 20:21:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75301' 00:09:02.613 20:21:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 75301 00:09:02.613 [2024-11-26 20:21:56.132004] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:02.613 20:21:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 75301 00:09:02.872 [2024-11-26 20:21:56.161815] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:03.131 20:21:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:03.131 20:21:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v 
Job /raidtest/tmp.147qpzN3QL 00:09:03.131 20:21:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:03.131 20:21:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:03.131 20:21:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:03.131 20:21:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:03.131 20:21:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:03.131 20:21:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:03.131 00:09:03.131 real 0m3.455s 00:09:03.131 user 0m4.276s 00:09:03.131 sys 0m0.632s 00:09:03.131 ************************************ 00:09:03.131 END TEST raid_write_error_test 00:09:03.131 20:21:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:03.131 20:21:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.131 ************************************ 00:09:03.131 20:21:56 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:09:03.131 20:21:56 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:03.131 20:21:56 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:09:03.131 20:21:56 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:03.131 20:21:56 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:03.131 20:21:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:03.131 ************************************ 00:09:03.131 START TEST raid_state_function_test 00:09:03.131 ************************************ 00:09:03.131 20:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 false 00:09:03.131 20:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # 
local raid_level=raid0 00:09:03.131 20:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:03.131 20:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:03.131 20:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:03.131 20:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:03.131 20:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:03.131 20:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:03.131 20:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:03.131 20:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:03.131 20:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:03.131 20:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:03.131 20:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:03.131 20:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:03.131 20:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:03.131 20:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:03.131 20:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:03.131 20:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:03.131 20:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:03.131 20:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:03.131 20:21:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:03.131 20:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:03.131 20:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:03.131 20:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:03.131 20:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:03.131 20:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:03.131 20:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:03.131 20:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=75433 00:09:03.131 20:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:03.131 20:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75433' 00:09:03.131 Process raid pid: 75433 00:09:03.131 20:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 75433 00:09:03.131 20:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 75433 ']' 00:09:03.131 20:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.131 20:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:03.131 20:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:03.131 20:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:03.131 20:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.390 [2024-11-26 20:21:56.747591] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:03.390 [2024-11-26 20:21:56.747983] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:03.650 [2024-11-26 20:21:56.941664] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.650 [2024-11-26 20:21:57.026855] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.650 [2024-11-26 20:21:57.102955] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:03.650 [2024-11-26 20:21:57.103080] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:04.220 20:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:04.220 20:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:04.220 20:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:04.220 20:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.220 20:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.220 [2024-11-26 20:21:57.601838] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:04.220 [2024-11-26 20:21:57.602032] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:04.220 [2024-11-26 20:21:57.602092] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:04.220 [2024-11-26 20:21:57.602125] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:04.220 [2024-11-26 20:21:57.602149] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:04.220 [2024-11-26 20:21:57.602188] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:04.220 20:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.220 20:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:04.220 20:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.220 20:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.220 20:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:04.220 20:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.220 20:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.220 20:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.220 20:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.220 20:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.220 20:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.220 20:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.220 20:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:09:04.220 20:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.220 20:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.220 20:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.220 20:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.220 "name": "Existed_Raid", 00:09:04.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.220 "strip_size_kb": 64, 00:09:04.220 "state": "configuring", 00:09:04.220 "raid_level": "raid0", 00:09:04.220 "superblock": false, 00:09:04.220 "num_base_bdevs": 3, 00:09:04.220 "num_base_bdevs_discovered": 0, 00:09:04.220 "num_base_bdevs_operational": 3, 00:09:04.220 "base_bdevs_list": [ 00:09:04.220 { 00:09:04.220 "name": "BaseBdev1", 00:09:04.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.220 "is_configured": false, 00:09:04.220 "data_offset": 0, 00:09:04.220 "data_size": 0 00:09:04.220 }, 00:09:04.220 { 00:09:04.220 "name": "BaseBdev2", 00:09:04.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.220 "is_configured": false, 00:09:04.220 "data_offset": 0, 00:09:04.220 "data_size": 0 00:09:04.220 }, 00:09:04.220 { 00:09:04.220 "name": "BaseBdev3", 00:09:04.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.220 "is_configured": false, 00:09:04.220 "data_offset": 0, 00:09:04.220 "data_size": 0 00:09:04.220 } 00:09:04.220 ] 00:09:04.220 }' 00:09:04.220 20:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.220 20:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.791 20:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:04.791 20:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.791 20:21:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.791 [2024-11-26 20:21:58.064941] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:04.791 [2024-11-26 20:21:58.064999] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:09:04.791 20:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.791 20:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:04.791 20:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.791 20:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.791 [2024-11-26 20:21:58.077007] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:04.791 [2024-11-26 20:21:58.077075] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:04.791 [2024-11-26 20:21:58.077087] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:04.791 [2024-11-26 20:21:58.077098] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:04.791 [2024-11-26 20:21:58.077107] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:04.791 [2024-11-26 20:21:58.077118] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:04.791 20:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.791 20:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:04.791 20:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:04.791 20:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.791 [2024-11-26 20:21:58.100890] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:04.791 BaseBdev1 00:09:04.791 20:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.791 20:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:04.791 20:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:04.791 20:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:04.791 20:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:04.791 20:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:04.791 20:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:04.791 20:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:04.791 20:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.791 20:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.791 20:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.791 20:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:04.791 20:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.791 20:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.791 [ 00:09:04.791 { 00:09:04.791 "name": "BaseBdev1", 00:09:04.791 "aliases": [ 00:09:04.791 "58bb372c-0960-4b78-8a49-580a24de870e" 00:09:04.791 ], 00:09:04.791 
"product_name": "Malloc disk", 00:09:04.791 "block_size": 512, 00:09:04.791 "num_blocks": 65536, 00:09:04.791 "uuid": "58bb372c-0960-4b78-8a49-580a24de870e", 00:09:04.791 "assigned_rate_limits": { 00:09:04.791 "rw_ios_per_sec": 0, 00:09:04.791 "rw_mbytes_per_sec": 0, 00:09:04.791 "r_mbytes_per_sec": 0, 00:09:04.791 "w_mbytes_per_sec": 0 00:09:04.791 }, 00:09:04.791 "claimed": true, 00:09:04.791 "claim_type": "exclusive_write", 00:09:04.791 "zoned": false, 00:09:04.791 "supported_io_types": { 00:09:04.791 "read": true, 00:09:04.791 "write": true, 00:09:04.791 "unmap": true, 00:09:04.791 "flush": true, 00:09:04.791 "reset": true, 00:09:04.791 "nvme_admin": false, 00:09:04.791 "nvme_io": false, 00:09:04.791 "nvme_io_md": false, 00:09:04.791 "write_zeroes": true, 00:09:04.791 "zcopy": true, 00:09:04.791 "get_zone_info": false, 00:09:04.791 "zone_management": false, 00:09:04.791 "zone_append": false, 00:09:04.791 "compare": false, 00:09:04.791 "compare_and_write": false, 00:09:04.791 "abort": true, 00:09:04.791 "seek_hole": false, 00:09:04.791 "seek_data": false, 00:09:04.791 "copy": true, 00:09:04.791 "nvme_iov_md": false 00:09:04.791 }, 00:09:04.791 "memory_domains": [ 00:09:04.791 { 00:09:04.791 "dma_device_id": "system", 00:09:04.791 "dma_device_type": 1 00:09:04.791 }, 00:09:04.791 { 00:09:04.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.791 "dma_device_type": 2 00:09:04.791 } 00:09:04.791 ], 00:09:04.791 "driver_specific": {} 00:09:04.791 } 00:09:04.791 ] 00:09:04.791 20:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.791 20:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:04.791 20:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:04.791 20:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.791 20:21:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.791 20:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:04.791 20:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.791 20:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.791 20:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.791 20:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.791 20:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.791 20:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.791 20:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.791 20:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.791 20:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.791 20:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.791 20:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.791 20:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.791 "name": "Existed_Raid", 00:09:04.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.791 "strip_size_kb": 64, 00:09:04.791 "state": "configuring", 00:09:04.791 "raid_level": "raid0", 00:09:04.791 "superblock": false, 00:09:04.791 "num_base_bdevs": 3, 00:09:04.791 "num_base_bdevs_discovered": 1, 00:09:04.791 "num_base_bdevs_operational": 3, 00:09:04.791 "base_bdevs_list": [ 00:09:04.791 { 00:09:04.791 "name": "BaseBdev1", 
00:09:04.791 "uuid": "58bb372c-0960-4b78-8a49-580a24de870e", 00:09:04.791 "is_configured": true, 00:09:04.791 "data_offset": 0, 00:09:04.791 "data_size": 65536 00:09:04.791 }, 00:09:04.791 { 00:09:04.791 "name": "BaseBdev2", 00:09:04.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.791 "is_configured": false, 00:09:04.791 "data_offset": 0, 00:09:04.791 "data_size": 0 00:09:04.791 }, 00:09:04.791 { 00:09:04.791 "name": "BaseBdev3", 00:09:04.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.791 "is_configured": false, 00:09:04.791 "data_offset": 0, 00:09:04.791 "data_size": 0 00:09:04.791 } 00:09:04.791 ] 00:09:04.792 }' 00:09:04.792 20:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.792 20:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.050 20:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:05.051 20:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.051 20:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.051 [2024-11-26 20:21:58.600167] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:05.310 [2024-11-26 20:21:58.600310] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:09:05.310 20:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.310 20:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:05.310 20:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.310 20:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.310 [2024-11-26 
20:21:58.612203] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:05.310 [2024-11-26 20:21:58.614370] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:05.310 [2024-11-26 20:21:58.614462] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:05.310 [2024-11-26 20:21:58.614477] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:05.310 [2024-11-26 20:21:58.614488] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:05.310 20:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.310 20:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:05.310 20:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:05.310 20:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:05.310 20:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.310 20:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.310 20:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:05.310 20:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.310 20:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.310 20:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.310 20:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.310 20:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:05.310 20:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.311 20:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.311 20:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.311 20:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.311 20:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.311 20:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.311 20:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.311 "name": "Existed_Raid", 00:09:05.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.311 "strip_size_kb": 64, 00:09:05.311 "state": "configuring", 00:09:05.311 "raid_level": "raid0", 00:09:05.311 "superblock": false, 00:09:05.311 "num_base_bdevs": 3, 00:09:05.311 "num_base_bdevs_discovered": 1, 00:09:05.311 "num_base_bdevs_operational": 3, 00:09:05.311 "base_bdevs_list": [ 00:09:05.311 { 00:09:05.311 "name": "BaseBdev1", 00:09:05.311 "uuid": "58bb372c-0960-4b78-8a49-580a24de870e", 00:09:05.311 "is_configured": true, 00:09:05.311 "data_offset": 0, 00:09:05.311 "data_size": 65536 00:09:05.311 }, 00:09:05.311 { 00:09:05.311 "name": "BaseBdev2", 00:09:05.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.311 "is_configured": false, 00:09:05.311 "data_offset": 0, 00:09:05.311 "data_size": 0 00:09:05.311 }, 00:09:05.311 { 00:09:05.311 "name": "BaseBdev3", 00:09:05.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.311 "is_configured": false, 00:09:05.311 "data_offset": 0, 00:09:05.311 "data_size": 0 00:09:05.311 } 00:09:05.311 ] 00:09:05.311 }' 00:09:05.311 20:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:05.311 20:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.571 20:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:05.571 20:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.571 20:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.841 [2024-11-26 20:21:59.123803] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:05.841 BaseBdev2 00:09:05.841 20:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.841 20:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:05.841 20:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:05.841 20:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:05.841 20:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:05.841 20:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:05.841 20:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:05.841 20:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:05.841 20:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.841 20:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.841 20:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.841 20:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:05.841 20:21:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.841 20:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.841 [ 00:09:05.841 { 00:09:05.841 "name": "BaseBdev2", 00:09:05.841 "aliases": [ 00:09:05.841 "28093012-b81d-4edf-b0be-4d3bdf483d2e" 00:09:05.841 ], 00:09:05.841 "product_name": "Malloc disk", 00:09:05.841 "block_size": 512, 00:09:05.841 "num_blocks": 65536, 00:09:05.841 "uuid": "28093012-b81d-4edf-b0be-4d3bdf483d2e", 00:09:05.841 "assigned_rate_limits": { 00:09:05.841 "rw_ios_per_sec": 0, 00:09:05.841 "rw_mbytes_per_sec": 0, 00:09:05.841 "r_mbytes_per_sec": 0, 00:09:05.841 "w_mbytes_per_sec": 0 00:09:05.841 }, 00:09:05.841 "claimed": true, 00:09:05.841 "claim_type": "exclusive_write", 00:09:05.841 "zoned": false, 00:09:05.841 "supported_io_types": { 00:09:05.841 "read": true, 00:09:05.841 "write": true, 00:09:05.841 "unmap": true, 00:09:05.841 "flush": true, 00:09:05.841 "reset": true, 00:09:05.841 "nvme_admin": false, 00:09:05.841 "nvme_io": false, 00:09:05.841 "nvme_io_md": false, 00:09:05.841 "write_zeroes": true, 00:09:05.841 "zcopy": true, 00:09:05.841 "get_zone_info": false, 00:09:05.841 "zone_management": false, 00:09:05.841 "zone_append": false, 00:09:05.841 "compare": false, 00:09:05.841 "compare_and_write": false, 00:09:05.841 "abort": true, 00:09:05.841 "seek_hole": false, 00:09:05.841 "seek_data": false, 00:09:05.841 "copy": true, 00:09:05.841 "nvme_iov_md": false 00:09:05.841 }, 00:09:05.841 "memory_domains": [ 00:09:05.841 { 00:09:05.841 "dma_device_id": "system", 00:09:05.841 "dma_device_type": 1 00:09:05.841 }, 00:09:05.841 { 00:09:05.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.841 "dma_device_type": 2 00:09:05.841 } 00:09:05.841 ], 00:09:05.841 "driver_specific": {} 00:09:05.841 } 00:09:05.841 ] 00:09:05.841 20:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.841 20:21:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:05.841 20:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:05.841 20:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:05.841 20:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:05.841 20:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.841 20:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.841 20:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:05.841 20:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.841 20:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.841 20:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.841 20:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.841 20:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.841 20:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.841 20:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.841 20:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.841 20:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.841 20:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.841 20:21:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.841 20:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.841 "name": "Existed_Raid", 00:09:05.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.841 "strip_size_kb": 64, 00:09:05.841 "state": "configuring", 00:09:05.841 "raid_level": "raid0", 00:09:05.841 "superblock": false, 00:09:05.841 "num_base_bdevs": 3, 00:09:05.841 "num_base_bdevs_discovered": 2, 00:09:05.841 "num_base_bdevs_operational": 3, 00:09:05.841 "base_bdevs_list": [ 00:09:05.841 { 00:09:05.841 "name": "BaseBdev1", 00:09:05.841 "uuid": "58bb372c-0960-4b78-8a49-580a24de870e", 00:09:05.841 "is_configured": true, 00:09:05.841 "data_offset": 0, 00:09:05.841 "data_size": 65536 00:09:05.841 }, 00:09:05.841 { 00:09:05.841 "name": "BaseBdev2", 00:09:05.841 "uuid": "28093012-b81d-4edf-b0be-4d3bdf483d2e", 00:09:05.841 "is_configured": true, 00:09:05.841 "data_offset": 0, 00:09:05.841 "data_size": 65536 00:09:05.841 }, 00:09:05.841 { 00:09:05.841 "name": "BaseBdev3", 00:09:05.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.841 "is_configured": false, 00:09:05.841 "data_offset": 0, 00:09:05.841 "data_size": 0 00:09:05.841 } 00:09:05.841 ] 00:09:05.841 }' 00:09:05.841 20:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.841 20:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.121 20:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:06.121 20:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.121 20:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.121 [2024-11-26 20:21:59.590392] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:06.121 [2024-11-26 20:21:59.590530] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:06.121 [2024-11-26 20:21:59.590548] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:06.121 [2024-11-26 20:21:59.590955] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:06.121 [2024-11-26 20:21:59.591138] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:06.121 [2024-11-26 20:21:59.591150] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:09:06.121 [2024-11-26 20:21:59.591392] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:06.121 BaseBdev3 00:09:06.121 20:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.121 20:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:06.121 20:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:06.121 20:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:06.121 20:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:06.121 20:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:06.121 20:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:06.121 20:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:06.121 20:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.121 20:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.121 20:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.121 
20:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:06.121 20:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.121 20:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.121 [ 00:09:06.121 { 00:09:06.121 "name": "BaseBdev3", 00:09:06.121 "aliases": [ 00:09:06.121 "190c4503-ae4c-4852-9d0b-d4a36570981e" 00:09:06.121 ], 00:09:06.121 "product_name": "Malloc disk", 00:09:06.121 "block_size": 512, 00:09:06.121 "num_blocks": 65536, 00:09:06.121 "uuid": "190c4503-ae4c-4852-9d0b-d4a36570981e", 00:09:06.121 "assigned_rate_limits": { 00:09:06.121 "rw_ios_per_sec": 0, 00:09:06.121 "rw_mbytes_per_sec": 0, 00:09:06.121 "r_mbytes_per_sec": 0, 00:09:06.121 "w_mbytes_per_sec": 0 00:09:06.121 }, 00:09:06.121 "claimed": true, 00:09:06.121 "claim_type": "exclusive_write", 00:09:06.121 "zoned": false, 00:09:06.121 "supported_io_types": { 00:09:06.121 "read": true, 00:09:06.121 "write": true, 00:09:06.121 "unmap": true, 00:09:06.121 "flush": true, 00:09:06.121 "reset": true, 00:09:06.121 "nvme_admin": false, 00:09:06.121 "nvme_io": false, 00:09:06.121 "nvme_io_md": false, 00:09:06.121 "write_zeroes": true, 00:09:06.121 "zcopy": true, 00:09:06.121 "get_zone_info": false, 00:09:06.121 "zone_management": false, 00:09:06.121 "zone_append": false, 00:09:06.121 "compare": false, 00:09:06.121 "compare_and_write": false, 00:09:06.121 "abort": true, 00:09:06.121 "seek_hole": false, 00:09:06.121 "seek_data": false, 00:09:06.121 "copy": true, 00:09:06.121 "nvme_iov_md": false 00:09:06.121 }, 00:09:06.121 "memory_domains": [ 00:09:06.121 { 00:09:06.121 "dma_device_id": "system", 00:09:06.121 "dma_device_type": 1 00:09:06.121 }, 00:09:06.121 { 00:09:06.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.121 "dma_device_type": 2 00:09:06.121 } 00:09:06.121 ], 00:09:06.121 "driver_specific": {} 00:09:06.121 } 00:09:06.121 ] 
00:09:06.121 20:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.121 20:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:06.121 20:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:06.121 20:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:06.121 20:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:06.121 20:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.121 20:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:06.121 20:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:06.121 20:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.121 20:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:06.121 20:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.121 20:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.121 20:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.121 20:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.121 20:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.121 20:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.122 20:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.122 20:21:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:06.122 20:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.381 20:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.381 "name": "Existed_Raid", 00:09:06.381 "uuid": "ef7acbd9-c447-4fac-b786-257cafeb1d6c", 00:09:06.381 "strip_size_kb": 64, 00:09:06.381 "state": "online", 00:09:06.381 "raid_level": "raid0", 00:09:06.381 "superblock": false, 00:09:06.381 "num_base_bdevs": 3, 00:09:06.381 "num_base_bdevs_discovered": 3, 00:09:06.381 "num_base_bdevs_operational": 3, 00:09:06.381 "base_bdevs_list": [ 00:09:06.381 { 00:09:06.381 "name": "BaseBdev1", 00:09:06.381 "uuid": "58bb372c-0960-4b78-8a49-580a24de870e", 00:09:06.381 "is_configured": true, 00:09:06.381 "data_offset": 0, 00:09:06.381 "data_size": 65536 00:09:06.381 }, 00:09:06.381 { 00:09:06.381 "name": "BaseBdev2", 00:09:06.381 "uuid": "28093012-b81d-4edf-b0be-4d3bdf483d2e", 00:09:06.381 "is_configured": true, 00:09:06.381 "data_offset": 0, 00:09:06.381 "data_size": 65536 00:09:06.381 }, 00:09:06.381 { 00:09:06.381 "name": "BaseBdev3", 00:09:06.381 "uuid": "190c4503-ae4c-4852-9d0b-d4a36570981e", 00:09:06.381 "is_configured": true, 00:09:06.381 "data_offset": 0, 00:09:06.381 "data_size": 65536 00:09:06.381 } 00:09:06.381 ] 00:09:06.381 }' 00:09:06.381 20:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.381 20:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.641 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:06.641 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:06.641 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:06.641 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:09:06.641 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:06.641 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:06.641 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:06.641 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:06.641 20:22:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.641 20:22:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.641 [2024-11-26 20:22:00.118040] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:06.641 20:22:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.641 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:06.641 "name": "Existed_Raid", 00:09:06.641 "aliases": [ 00:09:06.641 "ef7acbd9-c447-4fac-b786-257cafeb1d6c" 00:09:06.641 ], 00:09:06.641 "product_name": "Raid Volume", 00:09:06.641 "block_size": 512, 00:09:06.641 "num_blocks": 196608, 00:09:06.641 "uuid": "ef7acbd9-c447-4fac-b786-257cafeb1d6c", 00:09:06.641 "assigned_rate_limits": { 00:09:06.641 "rw_ios_per_sec": 0, 00:09:06.641 "rw_mbytes_per_sec": 0, 00:09:06.641 "r_mbytes_per_sec": 0, 00:09:06.641 "w_mbytes_per_sec": 0 00:09:06.641 }, 00:09:06.641 "claimed": false, 00:09:06.641 "zoned": false, 00:09:06.641 "supported_io_types": { 00:09:06.641 "read": true, 00:09:06.641 "write": true, 00:09:06.641 "unmap": true, 00:09:06.641 "flush": true, 00:09:06.641 "reset": true, 00:09:06.641 "nvme_admin": false, 00:09:06.641 "nvme_io": false, 00:09:06.641 "nvme_io_md": false, 00:09:06.641 "write_zeroes": true, 00:09:06.641 "zcopy": false, 00:09:06.641 "get_zone_info": false, 00:09:06.641 "zone_management": false, 00:09:06.641 
"zone_append": false, 00:09:06.641 "compare": false, 00:09:06.641 "compare_and_write": false, 00:09:06.641 "abort": false, 00:09:06.641 "seek_hole": false, 00:09:06.641 "seek_data": false, 00:09:06.641 "copy": false, 00:09:06.641 "nvme_iov_md": false 00:09:06.641 }, 00:09:06.641 "memory_domains": [ 00:09:06.641 { 00:09:06.641 "dma_device_id": "system", 00:09:06.641 "dma_device_type": 1 00:09:06.641 }, 00:09:06.641 { 00:09:06.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.641 "dma_device_type": 2 00:09:06.641 }, 00:09:06.641 { 00:09:06.641 "dma_device_id": "system", 00:09:06.641 "dma_device_type": 1 00:09:06.641 }, 00:09:06.641 { 00:09:06.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.641 "dma_device_type": 2 00:09:06.641 }, 00:09:06.641 { 00:09:06.641 "dma_device_id": "system", 00:09:06.641 "dma_device_type": 1 00:09:06.641 }, 00:09:06.641 { 00:09:06.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.641 "dma_device_type": 2 00:09:06.641 } 00:09:06.641 ], 00:09:06.641 "driver_specific": { 00:09:06.641 "raid": { 00:09:06.641 "uuid": "ef7acbd9-c447-4fac-b786-257cafeb1d6c", 00:09:06.641 "strip_size_kb": 64, 00:09:06.641 "state": "online", 00:09:06.641 "raid_level": "raid0", 00:09:06.641 "superblock": false, 00:09:06.641 "num_base_bdevs": 3, 00:09:06.641 "num_base_bdevs_discovered": 3, 00:09:06.641 "num_base_bdevs_operational": 3, 00:09:06.641 "base_bdevs_list": [ 00:09:06.641 { 00:09:06.641 "name": "BaseBdev1", 00:09:06.641 "uuid": "58bb372c-0960-4b78-8a49-580a24de870e", 00:09:06.641 "is_configured": true, 00:09:06.641 "data_offset": 0, 00:09:06.641 "data_size": 65536 00:09:06.641 }, 00:09:06.641 { 00:09:06.641 "name": "BaseBdev2", 00:09:06.641 "uuid": "28093012-b81d-4edf-b0be-4d3bdf483d2e", 00:09:06.641 "is_configured": true, 00:09:06.641 "data_offset": 0, 00:09:06.641 "data_size": 65536 00:09:06.641 }, 00:09:06.641 { 00:09:06.641 "name": "BaseBdev3", 00:09:06.641 "uuid": "190c4503-ae4c-4852-9d0b-d4a36570981e", 00:09:06.641 "is_configured": true, 
00:09:06.641 "data_offset": 0, 00:09:06.641 "data_size": 65536 00:09:06.641 } 00:09:06.641 ] 00:09:06.641 } 00:09:06.641 } 00:09:06.641 }' 00:09:06.641 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:06.901 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:06.901 BaseBdev2 00:09:06.901 BaseBdev3' 00:09:06.901 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.901 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:06.901 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:06.901 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:06.901 20:22:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.901 20:22:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.901 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.901 20:22:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.901 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:06.901 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:06.901 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:06.901 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:06.901 20:22:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.901 20:22:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.901 20:22:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.901 20:22:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.901 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:06.901 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:06.901 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:06.901 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.901 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:06.901 20:22:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.901 20:22:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.901 20:22:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.901 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:06.901 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:06.901 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:06.901 20:22:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.901 20:22:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.901 [2024-11-26 20:22:00.421277] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:06.901 [2024-11-26 20:22:00.421314] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:06.901 [2024-11-26 20:22:00.421373] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:06.901 20:22:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.901 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:06.901 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:06.901 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:06.901 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:06.901 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:06.901 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:09:06.901 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.901 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:06.901 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:06.901 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.901 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:06.901 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.901 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.901 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:06.901 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.901 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.901 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.901 20:22:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.160 20:22:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.160 20:22:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.160 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.160 "name": "Existed_Raid", 00:09:07.160 "uuid": "ef7acbd9-c447-4fac-b786-257cafeb1d6c", 00:09:07.160 "strip_size_kb": 64, 00:09:07.160 "state": "offline", 00:09:07.160 "raid_level": "raid0", 00:09:07.161 "superblock": false, 00:09:07.161 "num_base_bdevs": 3, 00:09:07.161 "num_base_bdevs_discovered": 2, 00:09:07.161 "num_base_bdevs_operational": 2, 00:09:07.161 "base_bdevs_list": [ 00:09:07.161 { 00:09:07.161 "name": null, 00:09:07.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.161 "is_configured": false, 00:09:07.161 "data_offset": 0, 00:09:07.161 "data_size": 65536 00:09:07.161 }, 00:09:07.161 { 00:09:07.161 "name": "BaseBdev2", 00:09:07.161 "uuid": "28093012-b81d-4edf-b0be-4d3bdf483d2e", 00:09:07.161 "is_configured": true, 00:09:07.161 "data_offset": 0, 00:09:07.161 "data_size": 65536 00:09:07.161 }, 00:09:07.161 { 00:09:07.161 "name": "BaseBdev3", 00:09:07.161 "uuid": "190c4503-ae4c-4852-9d0b-d4a36570981e", 00:09:07.161 "is_configured": true, 00:09:07.161 "data_offset": 0, 00:09:07.161 "data_size": 65536 00:09:07.161 } 00:09:07.161 ] 00:09:07.161 }' 00:09:07.161 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.161 20:22:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.420 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:07.420 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:07.420 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:07.421 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.421 20:22:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.421 20:22:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.421 20:22:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.421 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:07.421 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:07.421 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:07.421 20:22:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.421 20:22:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.421 [2024-11-26 20:22:00.908483] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:07.421 20:22:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.421 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:07.421 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:07.421 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.421 20:22:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.421 20:22:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.421 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:07.421 20:22:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.680 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:07.680 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:07.680 20:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:07.680 20:22:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.680 20:22:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.680 [2024-11-26 20:22:00.993579] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:07.680 [2024-11-26 20:22:00.993691] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:09:07.680 20:22:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.680 20:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:07.680 20:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:07.680 20:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:07.680 20:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.680 20:22:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.680 20:22:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:09:07.680 20:22:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.680 20:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:07.680 20:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:07.680 20:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:07.680 20:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.681 BaseBdev2 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.681 [ 00:09:07.681 { 00:09:07.681 "name": "BaseBdev2", 00:09:07.681 "aliases": [ 00:09:07.681 "09f3b925-aecf-4b47-8025-5f57291a1f08" 00:09:07.681 ], 00:09:07.681 "product_name": "Malloc disk", 00:09:07.681 "block_size": 512, 00:09:07.681 "num_blocks": 65536, 00:09:07.681 "uuid": "09f3b925-aecf-4b47-8025-5f57291a1f08", 00:09:07.681 "assigned_rate_limits": { 00:09:07.681 "rw_ios_per_sec": 0, 00:09:07.681 "rw_mbytes_per_sec": 0, 00:09:07.681 "r_mbytes_per_sec": 0, 00:09:07.681 "w_mbytes_per_sec": 0 00:09:07.681 }, 00:09:07.681 "claimed": false, 00:09:07.681 "zoned": false, 00:09:07.681 "supported_io_types": { 00:09:07.681 "read": true, 00:09:07.681 "write": true, 00:09:07.681 "unmap": true, 00:09:07.681 "flush": true, 00:09:07.681 "reset": true, 00:09:07.681 "nvme_admin": false, 00:09:07.681 "nvme_io": false, 00:09:07.681 "nvme_io_md": false, 00:09:07.681 "write_zeroes": true, 00:09:07.681 "zcopy": true, 00:09:07.681 "get_zone_info": false, 00:09:07.681 "zone_management": false, 00:09:07.681 "zone_append": false, 00:09:07.681 "compare": false, 00:09:07.681 "compare_and_write": false, 00:09:07.681 "abort": true, 00:09:07.681 "seek_hole": false, 00:09:07.681 "seek_data": false, 00:09:07.681 "copy": true, 00:09:07.681 "nvme_iov_md": false 00:09:07.681 }, 00:09:07.681 "memory_domains": [ 00:09:07.681 { 00:09:07.681 "dma_device_id": "system", 00:09:07.681 "dma_device_type": 1 00:09:07.681 }, 
00:09:07.681 { 00:09:07.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.681 "dma_device_type": 2 00:09:07.681 } 00:09:07.681 ], 00:09:07.681 "driver_specific": {} 00:09:07.681 } 00:09:07.681 ] 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.681 BaseBdev3 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.681 [ 00:09:07.681 { 00:09:07.681 "name": "BaseBdev3", 00:09:07.681 "aliases": [ 00:09:07.681 "0c1fe06a-7a6f-463e-9f92-b6726c9955fd" 00:09:07.681 ], 00:09:07.681 "product_name": "Malloc disk", 00:09:07.681 "block_size": 512, 00:09:07.681 "num_blocks": 65536, 00:09:07.681 "uuid": "0c1fe06a-7a6f-463e-9f92-b6726c9955fd", 00:09:07.681 "assigned_rate_limits": { 00:09:07.681 "rw_ios_per_sec": 0, 00:09:07.681 "rw_mbytes_per_sec": 0, 00:09:07.681 "r_mbytes_per_sec": 0, 00:09:07.681 "w_mbytes_per_sec": 0 00:09:07.681 }, 00:09:07.681 "claimed": false, 00:09:07.681 "zoned": false, 00:09:07.681 "supported_io_types": { 00:09:07.681 "read": true, 00:09:07.681 "write": true, 00:09:07.681 "unmap": true, 00:09:07.681 "flush": true, 00:09:07.681 "reset": true, 00:09:07.681 "nvme_admin": false, 00:09:07.681 "nvme_io": false, 00:09:07.681 "nvme_io_md": false, 00:09:07.681 "write_zeroes": true, 00:09:07.681 "zcopy": true, 00:09:07.681 "get_zone_info": false, 00:09:07.681 "zone_management": false, 00:09:07.681 "zone_append": false, 00:09:07.681 "compare": false, 00:09:07.681 "compare_and_write": false, 00:09:07.681 "abort": true, 00:09:07.681 "seek_hole": false, 00:09:07.681 "seek_data": false, 00:09:07.681 "copy": true, 00:09:07.681 "nvme_iov_md": false 00:09:07.681 }, 00:09:07.681 "memory_domains": [ 00:09:07.681 { 00:09:07.681 "dma_device_id": "system", 00:09:07.681 "dma_device_type": 1 00:09:07.681 }, 00:09:07.681 { 
00:09:07.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.681 "dma_device_type": 2 00:09:07.681 } 00:09:07.681 ], 00:09:07.681 "driver_specific": {} 00:09:07.681 } 00:09:07.681 ] 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.681 [2024-11-26 20:22:01.178665] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:07.681 [2024-11-26 20:22:01.178804] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:07.681 [2024-11-26 20:22:01.178857] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:07.681 [2024-11-26 20:22:01.180873] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.681 20:22:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.940 20:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.940 "name": "Existed_Raid", 00:09:07.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.940 "strip_size_kb": 64, 00:09:07.940 "state": "configuring", 00:09:07.940 "raid_level": "raid0", 00:09:07.940 "superblock": false, 00:09:07.940 "num_base_bdevs": 3, 00:09:07.941 "num_base_bdevs_discovered": 2, 00:09:07.941 "num_base_bdevs_operational": 3, 00:09:07.941 "base_bdevs_list": [ 00:09:07.941 { 00:09:07.941 "name": "BaseBdev1", 00:09:07.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.941 
"is_configured": false, 00:09:07.941 "data_offset": 0, 00:09:07.941 "data_size": 0 00:09:07.941 }, 00:09:07.941 { 00:09:07.941 "name": "BaseBdev2", 00:09:07.941 "uuid": "09f3b925-aecf-4b47-8025-5f57291a1f08", 00:09:07.941 "is_configured": true, 00:09:07.941 "data_offset": 0, 00:09:07.941 "data_size": 65536 00:09:07.941 }, 00:09:07.941 { 00:09:07.941 "name": "BaseBdev3", 00:09:07.941 "uuid": "0c1fe06a-7a6f-463e-9f92-b6726c9955fd", 00:09:07.941 "is_configured": true, 00:09:07.941 "data_offset": 0, 00:09:07.941 "data_size": 65536 00:09:07.941 } 00:09:07.941 ] 00:09:07.941 }' 00:09:07.941 20:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.941 20:22:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.200 20:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:08.200 20:22:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.200 20:22:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.200 [2024-11-26 20:22:01.653825] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:08.200 20:22:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.200 20:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:08.200 20:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.200 20:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:08.200 20:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:08.200 20:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.200 20:22:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.200 20:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.200 20:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.200 20:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.200 20:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.200 20:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.200 20:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.200 20:22:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.200 20:22:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.200 20:22:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.200 20:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.200 "name": "Existed_Raid", 00:09:08.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.200 "strip_size_kb": 64, 00:09:08.200 "state": "configuring", 00:09:08.200 "raid_level": "raid0", 00:09:08.200 "superblock": false, 00:09:08.200 "num_base_bdevs": 3, 00:09:08.200 "num_base_bdevs_discovered": 1, 00:09:08.200 "num_base_bdevs_operational": 3, 00:09:08.200 "base_bdevs_list": [ 00:09:08.200 { 00:09:08.200 "name": "BaseBdev1", 00:09:08.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.200 "is_configured": false, 00:09:08.200 "data_offset": 0, 00:09:08.200 "data_size": 0 00:09:08.200 }, 00:09:08.200 { 00:09:08.200 "name": null, 00:09:08.200 "uuid": "09f3b925-aecf-4b47-8025-5f57291a1f08", 00:09:08.200 "is_configured": false, 00:09:08.200 "data_offset": 0, 
00:09:08.200 "data_size": 65536 00:09:08.200 }, 00:09:08.200 { 00:09:08.200 "name": "BaseBdev3", 00:09:08.200 "uuid": "0c1fe06a-7a6f-463e-9f92-b6726c9955fd", 00:09:08.200 "is_configured": true, 00:09:08.200 "data_offset": 0, 00:09:08.200 "data_size": 65536 00:09:08.200 } 00:09:08.200 ] 00:09:08.200 }' 00:09:08.200 20:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.200 20:22:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.767 20:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:08.767 20:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.767 20:22:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.767 20:22:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.767 20:22:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.767 20:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:08.767 20:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:08.767 20:22:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.767 20:22:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.767 [2024-11-26 20:22:02.166997] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:08.767 BaseBdev1 00:09:08.767 20:22:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.767 20:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:08.767 20:22:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev1 00:09:08.767 20:22:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:08.767 20:22:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:08.767 20:22:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:08.767 20:22:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:08.767 20:22:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:08.767 20:22:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.767 20:22:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.767 20:22:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.767 20:22:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:08.767 20:22:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.767 20:22:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.767 [ 00:09:08.767 { 00:09:08.767 "name": "BaseBdev1", 00:09:08.767 "aliases": [ 00:09:08.767 "bbdd294a-0b8a-4287-943c-87dbaac561be" 00:09:08.767 ], 00:09:08.767 "product_name": "Malloc disk", 00:09:08.767 "block_size": 512, 00:09:08.767 "num_blocks": 65536, 00:09:08.767 "uuid": "bbdd294a-0b8a-4287-943c-87dbaac561be", 00:09:08.767 "assigned_rate_limits": { 00:09:08.767 "rw_ios_per_sec": 0, 00:09:08.767 "rw_mbytes_per_sec": 0, 00:09:08.767 "r_mbytes_per_sec": 0, 00:09:08.767 "w_mbytes_per_sec": 0 00:09:08.767 }, 00:09:08.767 "claimed": true, 00:09:08.767 "claim_type": "exclusive_write", 00:09:08.767 "zoned": false, 00:09:08.767 "supported_io_types": { 00:09:08.767 "read": true, 00:09:08.767 "write": true, 00:09:08.767 "unmap": 
true, 00:09:08.767 "flush": true, 00:09:08.767 "reset": true, 00:09:08.767 "nvme_admin": false, 00:09:08.767 "nvme_io": false, 00:09:08.767 "nvme_io_md": false, 00:09:08.767 "write_zeroes": true, 00:09:08.767 "zcopy": true, 00:09:08.767 "get_zone_info": false, 00:09:08.767 "zone_management": false, 00:09:08.767 "zone_append": false, 00:09:08.767 "compare": false, 00:09:08.767 "compare_and_write": false, 00:09:08.767 "abort": true, 00:09:08.767 "seek_hole": false, 00:09:08.767 "seek_data": false, 00:09:08.767 "copy": true, 00:09:08.767 "nvme_iov_md": false 00:09:08.767 }, 00:09:08.767 "memory_domains": [ 00:09:08.767 { 00:09:08.767 "dma_device_id": "system", 00:09:08.767 "dma_device_type": 1 00:09:08.767 }, 00:09:08.767 { 00:09:08.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.767 "dma_device_type": 2 00:09:08.767 } 00:09:08.767 ], 00:09:08.767 "driver_specific": {} 00:09:08.767 } 00:09:08.767 ] 00:09:08.767 20:22:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.767 20:22:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:08.767 20:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:08.767 20:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.767 20:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:08.767 20:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:08.767 20:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.767 20:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.767 20:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.767 20:22:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.767 20:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.767 20:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.767 20:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.767 20:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.767 20:22:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.767 20:22:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.767 20:22:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.767 20:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.767 "name": "Existed_Raid", 00:09:08.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.767 "strip_size_kb": 64, 00:09:08.767 "state": "configuring", 00:09:08.767 "raid_level": "raid0", 00:09:08.767 "superblock": false, 00:09:08.767 "num_base_bdevs": 3, 00:09:08.767 "num_base_bdevs_discovered": 2, 00:09:08.767 "num_base_bdevs_operational": 3, 00:09:08.767 "base_bdevs_list": [ 00:09:08.767 { 00:09:08.767 "name": "BaseBdev1", 00:09:08.767 "uuid": "bbdd294a-0b8a-4287-943c-87dbaac561be", 00:09:08.767 "is_configured": true, 00:09:08.767 "data_offset": 0, 00:09:08.768 "data_size": 65536 00:09:08.768 }, 00:09:08.768 { 00:09:08.768 "name": null, 00:09:08.768 "uuid": "09f3b925-aecf-4b47-8025-5f57291a1f08", 00:09:08.768 "is_configured": false, 00:09:08.768 "data_offset": 0, 00:09:08.768 "data_size": 65536 00:09:08.768 }, 00:09:08.768 { 00:09:08.768 "name": "BaseBdev3", 00:09:08.768 "uuid": "0c1fe06a-7a6f-463e-9f92-b6726c9955fd", 00:09:08.768 "is_configured": true, 00:09:08.768 "data_offset": 0, 
00:09:08.768 "data_size": 65536 00:09:08.768 } 00:09:08.768 ] 00:09:08.768 }' 00:09:08.768 20:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.768 20:22:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.337 20:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:09.337 20:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.337 20:22:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.337 20:22:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.337 20:22:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.337 20:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:09.337 20:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:09.337 20:22:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.337 20:22:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.337 [2024-11-26 20:22:02.694202] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:09.337 20:22:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.337 20:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:09.337 20:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.337 20:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.338 20:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:09:09.338 20:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.338 20:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.338 20:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.338 20:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.338 20:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.338 20:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.338 20:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.338 20:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.338 20:22:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.338 20:22:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.338 20:22:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.338 20:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.338 "name": "Existed_Raid", 00:09:09.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.338 "strip_size_kb": 64, 00:09:09.338 "state": "configuring", 00:09:09.338 "raid_level": "raid0", 00:09:09.338 "superblock": false, 00:09:09.338 "num_base_bdevs": 3, 00:09:09.338 "num_base_bdevs_discovered": 1, 00:09:09.338 "num_base_bdevs_operational": 3, 00:09:09.338 "base_bdevs_list": [ 00:09:09.338 { 00:09:09.338 "name": "BaseBdev1", 00:09:09.338 "uuid": "bbdd294a-0b8a-4287-943c-87dbaac561be", 00:09:09.338 "is_configured": true, 00:09:09.338 "data_offset": 0, 00:09:09.338 "data_size": 65536 00:09:09.338 }, 00:09:09.338 { 
00:09:09.338 "name": null, 00:09:09.338 "uuid": "09f3b925-aecf-4b47-8025-5f57291a1f08", 00:09:09.338 "is_configured": false, 00:09:09.338 "data_offset": 0, 00:09:09.338 "data_size": 65536 00:09:09.338 }, 00:09:09.338 { 00:09:09.338 "name": null, 00:09:09.338 "uuid": "0c1fe06a-7a6f-463e-9f92-b6726c9955fd", 00:09:09.338 "is_configured": false, 00:09:09.338 "data_offset": 0, 00:09:09.338 "data_size": 65536 00:09:09.338 } 00:09:09.338 ] 00:09:09.338 }' 00:09:09.338 20:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.338 20:22:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.636 20:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.636 20:22:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.636 20:22:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.636 20:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:09.636 20:22:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.909 20:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:09.909 20:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:09.909 20:22:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.909 20:22:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.909 [2024-11-26 20:22:03.181508] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:09.909 20:22:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.909 20:22:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:09.909 20:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.909 20:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.909 20:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:09.909 20:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.909 20:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.909 20:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.909 20:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.909 20:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.909 20:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.909 20:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.909 20:22:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.909 20:22:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.909 20:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.909 20:22:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.909 20:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.909 "name": "Existed_Raid", 00:09:09.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.909 "strip_size_kb": 64, 00:09:09.909 "state": "configuring", 00:09:09.909 "raid_level": "raid0", 00:09:09.909 
"superblock": false, 00:09:09.909 "num_base_bdevs": 3, 00:09:09.909 "num_base_bdevs_discovered": 2, 00:09:09.909 "num_base_bdevs_operational": 3, 00:09:09.909 "base_bdevs_list": [ 00:09:09.909 { 00:09:09.909 "name": "BaseBdev1", 00:09:09.909 "uuid": "bbdd294a-0b8a-4287-943c-87dbaac561be", 00:09:09.909 "is_configured": true, 00:09:09.909 "data_offset": 0, 00:09:09.909 "data_size": 65536 00:09:09.909 }, 00:09:09.909 { 00:09:09.909 "name": null, 00:09:09.909 "uuid": "09f3b925-aecf-4b47-8025-5f57291a1f08", 00:09:09.909 "is_configured": false, 00:09:09.909 "data_offset": 0, 00:09:09.909 "data_size": 65536 00:09:09.909 }, 00:09:09.909 { 00:09:09.909 "name": "BaseBdev3", 00:09:09.909 "uuid": "0c1fe06a-7a6f-463e-9f92-b6726c9955fd", 00:09:09.909 "is_configured": true, 00:09:09.909 "data_offset": 0, 00:09:09.909 "data_size": 65536 00:09:09.909 } 00:09:09.909 ] 00:09:09.909 }' 00:09:09.909 20:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.909 20:22:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.169 20:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.169 20:22:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.169 20:22:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.169 20:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:10.169 20:22:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.169 20:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:10.169 20:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:10.169 20:22:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:10.169 20:22:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.169 [2024-11-26 20:22:03.692651] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:10.169 20:22:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.169 20:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:10.169 20:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.169 20:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.169 20:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:10.169 20:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.169 20:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.169 20:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.169 20:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.169 20:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.169 20:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.169 20:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.169 20:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.169 20:22:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.169 20:22:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.429 20:22:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.429 20:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.429 "name": "Existed_Raid", 00:09:10.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.429 "strip_size_kb": 64, 00:09:10.429 "state": "configuring", 00:09:10.429 "raid_level": "raid0", 00:09:10.429 "superblock": false, 00:09:10.429 "num_base_bdevs": 3, 00:09:10.429 "num_base_bdevs_discovered": 1, 00:09:10.429 "num_base_bdevs_operational": 3, 00:09:10.429 "base_bdevs_list": [ 00:09:10.429 { 00:09:10.429 "name": null, 00:09:10.429 "uuid": "bbdd294a-0b8a-4287-943c-87dbaac561be", 00:09:10.429 "is_configured": false, 00:09:10.429 "data_offset": 0, 00:09:10.429 "data_size": 65536 00:09:10.429 }, 00:09:10.429 { 00:09:10.429 "name": null, 00:09:10.429 "uuid": "09f3b925-aecf-4b47-8025-5f57291a1f08", 00:09:10.429 "is_configured": false, 00:09:10.429 "data_offset": 0, 00:09:10.429 "data_size": 65536 00:09:10.429 }, 00:09:10.429 { 00:09:10.429 "name": "BaseBdev3", 00:09:10.429 "uuid": "0c1fe06a-7a6f-463e-9f92-b6726c9955fd", 00:09:10.429 "is_configured": true, 00:09:10.429 "data_offset": 0, 00:09:10.429 "data_size": 65536 00:09:10.429 } 00:09:10.429 ] 00:09:10.429 }' 00:09:10.429 20:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.429 20:22:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.689 20:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.689 20:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.689 20:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.689 20:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:10.689 20:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:09:10.950 20:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:10.950 20:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:10.950 20:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.950 20:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.950 [2024-11-26 20:22:04.254341] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:10.950 20:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.950 20:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:10.950 20:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.950 20:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.950 20:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:10.950 20:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.950 20:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.950 20:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.950 20:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.950 20:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.950 20:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.950 20:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:09:10.950 20:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.950 20:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.950 20:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.950 20:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.950 20:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.950 "name": "Existed_Raid", 00:09:10.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.950 "strip_size_kb": 64, 00:09:10.950 "state": "configuring", 00:09:10.950 "raid_level": "raid0", 00:09:10.950 "superblock": false, 00:09:10.950 "num_base_bdevs": 3, 00:09:10.950 "num_base_bdevs_discovered": 2, 00:09:10.950 "num_base_bdevs_operational": 3, 00:09:10.950 "base_bdevs_list": [ 00:09:10.950 { 00:09:10.950 "name": null, 00:09:10.950 "uuid": "bbdd294a-0b8a-4287-943c-87dbaac561be", 00:09:10.950 "is_configured": false, 00:09:10.950 "data_offset": 0, 00:09:10.950 "data_size": 65536 00:09:10.950 }, 00:09:10.950 { 00:09:10.950 "name": "BaseBdev2", 00:09:10.950 "uuid": "09f3b925-aecf-4b47-8025-5f57291a1f08", 00:09:10.950 "is_configured": true, 00:09:10.950 "data_offset": 0, 00:09:10.950 "data_size": 65536 00:09:10.950 }, 00:09:10.950 { 00:09:10.950 "name": "BaseBdev3", 00:09:10.950 "uuid": "0c1fe06a-7a6f-463e-9f92-b6726c9955fd", 00:09:10.950 "is_configured": true, 00:09:10.950 "data_offset": 0, 00:09:10.950 "data_size": 65536 00:09:10.950 } 00:09:10.950 ] 00:09:10.950 }' 00:09:10.950 20:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.950 20:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.210 20:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.210 20:22:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:11.210 20:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.210 20:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.210 20:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.469 20:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:11.469 20:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.469 20:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:11.469 20:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.469 20:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.469 20:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.469 20:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u bbdd294a-0b8a-4287-943c-87dbaac561be 00:09:11.469 20:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.469 20:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.469 [2024-11-26 20:22:04.831110] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:11.469 [2024-11-26 20:22:04.831158] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:11.469 [2024-11-26 20:22:04.831169] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:11.469 [2024-11-26 20:22:04.831449] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 
00:09:11.469 [2024-11-26 20:22:04.831577] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:11.469 [2024-11-26 20:22:04.831587] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:09:11.469 [2024-11-26 20:22:04.831838] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:11.469 NewBaseBdev 00:09:11.469 20:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.469 20:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:11.469 20:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:11.469 20:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:11.469 20:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:11.469 20:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:11.469 20:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:11.469 20:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:11.469 20:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.469 20:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.469 20:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.469 20:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:11.469 20:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.469 20:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:11.469 [ 00:09:11.469 { 00:09:11.469 "name": "NewBaseBdev", 00:09:11.469 "aliases": [ 00:09:11.469 "bbdd294a-0b8a-4287-943c-87dbaac561be" 00:09:11.469 ], 00:09:11.469 "product_name": "Malloc disk", 00:09:11.469 "block_size": 512, 00:09:11.469 "num_blocks": 65536, 00:09:11.469 "uuid": "bbdd294a-0b8a-4287-943c-87dbaac561be", 00:09:11.469 "assigned_rate_limits": { 00:09:11.469 "rw_ios_per_sec": 0, 00:09:11.469 "rw_mbytes_per_sec": 0, 00:09:11.469 "r_mbytes_per_sec": 0, 00:09:11.469 "w_mbytes_per_sec": 0 00:09:11.469 }, 00:09:11.469 "claimed": true, 00:09:11.469 "claim_type": "exclusive_write", 00:09:11.469 "zoned": false, 00:09:11.469 "supported_io_types": { 00:09:11.469 "read": true, 00:09:11.469 "write": true, 00:09:11.469 "unmap": true, 00:09:11.469 "flush": true, 00:09:11.469 "reset": true, 00:09:11.469 "nvme_admin": false, 00:09:11.469 "nvme_io": false, 00:09:11.469 "nvme_io_md": false, 00:09:11.469 "write_zeroes": true, 00:09:11.469 "zcopy": true, 00:09:11.469 "get_zone_info": false, 00:09:11.469 "zone_management": false, 00:09:11.469 "zone_append": false, 00:09:11.469 "compare": false, 00:09:11.469 "compare_and_write": false, 00:09:11.469 "abort": true, 00:09:11.469 "seek_hole": false, 00:09:11.469 "seek_data": false, 00:09:11.469 "copy": true, 00:09:11.469 "nvme_iov_md": false 00:09:11.469 }, 00:09:11.470 "memory_domains": [ 00:09:11.470 { 00:09:11.470 "dma_device_id": "system", 00:09:11.470 "dma_device_type": 1 00:09:11.470 }, 00:09:11.470 { 00:09:11.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.470 "dma_device_type": 2 00:09:11.470 } 00:09:11.470 ], 00:09:11.470 "driver_specific": {} 00:09:11.470 } 00:09:11.470 ] 00:09:11.470 20:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.470 20:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:11.470 20:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:09:11.470 20:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.470 20:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:11.470 20:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:11.470 20:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.470 20:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.470 20:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.470 20:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.470 20:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.470 20:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.470 20:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.470 20:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.470 20:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.470 20:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.470 20:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.470 20:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.470 "name": "Existed_Raid", 00:09:11.470 "uuid": "ab067ac8-3052-4a83-abaf-a5b639622a90", 00:09:11.470 "strip_size_kb": 64, 00:09:11.470 "state": "online", 00:09:11.470 "raid_level": "raid0", 00:09:11.470 "superblock": false, 00:09:11.470 "num_base_bdevs": 3, 00:09:11.470 
"num_base_bdevs_discovered": 3, 00:09:11.470 "num_base_bdevs_operational": 3, 00:09:11.470 "base_bdevs_list": [ 00:09:11.470 { 00:09:11.470 "name": "NewBaseBdev", 00:09:11.470 "uuid": "bbdd294a-0b8a-4287-943c-87dbaac561be", 00:09:11.470 "is_configured": true, 00:09:11.470 "data_offset": 0, 00:09:11.470 "data_size": 65536 00:09:11.470 }, 00:09:11.470 { 00:09:11.470 "name": "BaseBdev2", 00:09:11.470 "uuid": "09f3b925-aecf-4b47-8025-5f57291a1f08", 00:09:11.470 "is_configured": true, 00:09:11.470 "data_offset": 0, 00:09:11.470 "data_size": 65536 00:09:11.470 }, 00:09:11.470 { 00:09:11.470 "name": "BaseBdev3", 00:09:11.470 "uuid": "0c1fe06a-7a6f-463e-9f92-b6726c9955fd", 00:09:11.470 "is_configured": true, 00:09:11.470 "data_offset": 0, 00:09:11.470 "data_size": 65536 00:09:11.470 } 00:09:11.470 ] 00:09:11.470 }' 00:09:11.470 20:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.470 20:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.036 20:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:12.036 20:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:12.036 20:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:12.036 20:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:12.036 20:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:12.036 20:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:12.036 20:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:12.036 20:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:12.036 20:22:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.036 20:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.036 [2024-11-26 20:22:05.290749] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:12.036 20:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.036 20:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:12.036 "name": "Existed_Raid", 00:09:12.036 "aliases": [ 00:09:12.036 "ab067ac8-3052-4a83-abaf-a5b639622a90" 00:09:12.036 ], 00:09:12.036 "product_name": "Raid Volume", 00:09:12.036 "block_size": 512, 00:09:12.036 "num_blocks": 196608, 00:09:12.036 "uuid": "ab067ac8-3052-4a83-abaf-a5b639622a90", 00:09:12.036 "assigned_rate_limits": { 00:09:12.036 "rw_ios_per_sec": 0, 00:09:12.036 "rw_mbytes_per_sec": 0, 00:09:12.036 "r_mbytes_per_sec": 0, 00:09:12.036 "w_mbytes_per_sec": 0 00:09:12.036 }, 00:09:12.036 "claimed": false, 00:09:12.036 "zoned": false, 00:09:12.036 "supported_io_types": { 00:09:12.036 "read": true, 00:09:12.036 "write": true, 00:09:12.036 "unmap": true, 00:09:12.036 "flush": true, 00:09:12.036 "reset": true, 00:09:12.036 "nvme_admin": false, 00:09:12.036 "nvme_io": false, 00:09:12.036 "nvme_io_md": false, 00:09:12.036 "write_zeroes": true, 00:09:12.036 "zcopy": false, 00:09:12.036 "get_zone_info": false, 00:09:12.036 "zone_management": false, 00:09:12.036 "zone_append": false, 00:09:12.036 "compare": false, 00:09:12.036 "compare_and_write": false, 00:09:12.036 "abort": false, 00:09:12.036 "seek_hole": false, 00:09:12.037 "seek_data": false, 00:09:12.037 "copy": false, 00:09:12.037 "nvme_iov_md": false 00:09:12.037 }, 00:09:12.037 "memory_domains": [ 00:09:12.037 { 00:09:12.037 "dma_device_id": "system", 00:09:12.037 "dma_device_type": 1 00:09:12.037 }, 00:09:12.037 { 00:09:12.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.037 "dma_device_type": 2 00:09:12.037 }, 
00:09:12.037 { 00:09:12.037 "dma_device_id": "system", 00:09:12.037 "dma_device_type": 1 00:09:12.037 }, 00:09:12.037 { 00:09:12.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.037 "dma_device_type": 2 00:09:12.037 }, 00:09:12.037 { 00:09:12.037 "dma_device_id": "system", 00:09:12.037 "dma_device_type": 1 00:09:12.037 }, 00:09:12.037 { 00:09:12.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.037 "dma_device_type": 2 00:09:12.037 } 00:09:12.037 ], 00:09:12.037 "driver_specific": { 00:09:12.037 "raid": { 00:09:12.037 "uuid": "ab067ac8-3052-4a83-abaf-a5b639622a90", 00:09:12.037 "strip_size_kb": 64, 00:09:12.037 "state": "online", 00:09:12.037 "raid_level": "raid0", 00:09:12.037 "superblock": false, 00:09:12.037 "num_base_bdevs": 3, 00:09:12.037 "num_base_bdevs_discovered": 3, 00:09:12.037 "num_base_bdevs_operational": 3, 00:09:12.037 "base_bdevs_list": [ 00:09:12.037 { 00:09:12.037 "name": "NewBaseBdev", 00:09:12.037 "uuid": "bbdd294a-0b8a-4287-943c-87dbaac561be", 00:09:12.037 "is_configured": true, 00:09:12.037 "data_offset": 0, 00:09:12.037 "data_size": 65536 00:09:12.037 }, 00:09:12.037 { 00:09:12.037 "name": "BaseBdev2", 00:09:12.037 "uuid": "09f3b925-aecf-4b47-8025-5f57291a1f08", 00:09:12.037 "is_configured": true, 00:09:12.037 "data_offset": 0, 00:09:12.037 "data_size": 65536 00:09:12.037 }, 00:09:12.037 { 00:09:12.037 "name": "BaseBdev3", 00:09:12.037 "uuid": "0c1fe06a-7a6f-463e-9f92-b6726c9955fd", 00:09:12.037 "is_configured": true, 00:09:12.037 "data_offset": 0, 00:09:12.037 "data_size": 65536 00:09:12.037 } 00:09:12.037 ] 00:09:12.037 } 00:09:12.037 } 00:09:12.037 }' 00:09:12.037 20:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:12.037 20:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:12.037 BaseBdev2 00:09:12.037 BaseBdev3' 00:09:12.037 20:22:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.037 20:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:12.037 20:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:12.037 20:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:12.037 20:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.037 20:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.037 20:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.037 20:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.037 20:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:12.037 20:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:12.037 20:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:12.037 20:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:12.037 20:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.037 20:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.037 20:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.037 20:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.037 20:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:09:12.037 20:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:12.037 20:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:12.037 20:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:12.037 20:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.037 20:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.037 20:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.037 20:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.037 20:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:12.037 20:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:12.037 20:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:12.037 20:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.037 20:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.037 [2024-11-26 20:22:05.581906] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:12.037 [2024-11-26 20:22:05.581940] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:12.037 [2024-11-26 20:22:05.582036] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:12.037 [2024-11-26 20:22:05.582091] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:12.037 [2024-11-26 20:22:05.582104] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:09:12.295 20:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.295 20:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 75433 00:09:12.295 20:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 75433 ']' 00:09:12.295 20:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 75433 00:09:12.295 20:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:12.295 20:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:12.295 20:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75433 00:09:12.296 killing process with pid 75433 00:09:12.296 20:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:12.296 20:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:12.296 20:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75433' 00:09:12.296 20:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 75433 00:09:12.296 [2024-11-26 20:22:05.629710] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:12.296 20:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 75433 00:09:12.296 [2024-11-26 20:22:05.674179] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:12.604 20:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:12.604 00:09:12.604 real 0m9.416s 00:09:12.604 user 0m15.765s 00:09:12.604 sys 0m2.103s 00:09:12.604 20:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:09:12.604 ************************************ 00:09:12.604 END TEST raid_state_function_test 00:09:12.604 ************************************ 00:09:12.604 20:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.604 20:22:06 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:09:12.604 20:22:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:12.604 20:22:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:12.604 20:22:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:12.604 ************************************ 00:09:12.604 START TEST raid_state_function_test_sb 00:09:12.604 ************************************ 00:09:12.605 20:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 true 00:09:12.605 20:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:12.605 20:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:12.605 20:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:12.605 20:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:12.605 20:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:12.605 20:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:12.605 20:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:12.605 20:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:12.605 20:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:12.605 20:22:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:12.605 20:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:12.605 20:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:12.605 20:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:12.605 20:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:12.605 20:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:12.605 20:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:12.605 20:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:12.605 20:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:12.605 20:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:12.605 20:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:12.605 20:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:12.605 20:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:12.605 20:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:12.605 20:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:12.605 20:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:12.605 20:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:12.605 Process raid pid: 76044 00:09:12.605 20:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=76044 
00:09:12.605 20:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:12.605 20:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 76044' 00:09:12.605 20:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 76044 00:09:12.605 20:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 76044 ']' 00:09:12.605 20:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.605 20:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:12.605 20:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.605 20:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:12.605 20:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.864 [2024-11-26 20:22:06.205132] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:12.864 [2024-11-26 20:22:06.205914] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:12.864 [2024-11-26 20:22:06.370729] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.123 [2024-11-26 20:22:06.451519] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.123 [2024-11-26 20:22:06.524504] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:13.123 [2024-11-26 20:22:06.524647] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:13.691 20:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:13.691 20:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:13.691 20:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:13.691 20:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.691 20:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.691 [2024-11-26 20:22:07.076541] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:13.691 [2024-11-26 20:22:07.076678] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:13.691 [2024-11-26 20:22:07.076742] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:13.691 [2024-11-26 20:22:07.076782] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:13.691 [2024-11-26 20:22:07.076813] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:09:13.691 [2024-11-26 20:22:07.076854] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:13.691 20:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.691 20:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:13.691 20:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.691 20:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.691 20:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:13.691 20:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.691 20:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.691 20:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.691 20:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.691 20:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.691 20:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.691 20:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.691 20:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.691 20:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.691 20:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.691 20:22:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.691 20:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.691 "name": "Existed_Raid", 00:09:13.691 "uuid": "fedc3d20-6b36-45fa-a733-bec7e8c16c86", 00:09:13.691 "strip_size_kb": 64, 00:09:13.691 "state": "configuring", 00:09:13.691 "raid_level": "raid0", 00:09:13.691 "superblock": true, 00:09:13.691 "num_base_bdevs": 3, 00:09:13.691 "num_base_bdevs_discovered": 0, 00:09:13.691 "num_base_bdevs_operational": 3, 00:09:13.691 "base_bdevs_list": [ 00:09:13.691 { 00:09:13.691 "name": "BaseBdev1", 00:09:13.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.691 "is_configured": false, 00:09:13.691 "data_offset": 0, 00:09:13.691 "data_size": 0 00:09:13.691 }, 00:09:13.691 { 00:09:13.691 "name": "BaseBdev2", 00:09:13.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.691 "is_configured": false, 00:09:13.691 "data_offset": 0, 00:09:13.691 "data_size": 0 00:09:13.691 }, 00:09:13.691 { 00:09:13.691 "name": "BaseBdev3", 00:09:13.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.691 "is_configured": false, 00:09:13.691 "data_offset": 0, 00:09:13.691 "data_size": 0 00:09:13.691 } 00:09:13.691 ] 00:09:13.691 }' 00:09:13.691 20:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.691 20:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.263 20:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:14.263 20:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.263 20:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.263 [2024-11-26 20:22:07.547695] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:14.263 [2024-11-26 20:22:07.547804] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:09:14.263 20:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.263 20:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:14.263 20:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.263 20:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.263 [2024-11-26 20:22:07.559773] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:14.263 [2024-11-26 20:22:07.559885] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:14.263 [2024-11-26 20:22:07.559914] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:14.263 [2024-11-26 20:22:07.559938] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:14.263 [2024-11-26 20:22:07.559956] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:14.263 [2024-11-26 20:22:07.559977] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:14.263 20:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.263 20:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:14.263 20:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.263 20:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.263 [2024-11-26 20:22:07.581222] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:14.263 BaseBdev1 
00:09:14.263 20:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.263 20:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:14.263 20:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:14.263 20:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:14.263 20:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:14.263 20:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:14.263 20:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:14.263 20:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:14.263 20:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.263 20:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.263 20:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.263 20:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:14.263 20:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.263 20:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.263 [ 00:09:14.263 { 00:09:14.263 "name": "BaseBdev1", 00:09:14.263 "aliases": [ 00:09:14.263 "a64f0fe0-ad20-46ad-906a-33639d6667a3" 00:09:14.263 ], 00:09:14.263 "product_name": "Malloc disk", 00:09:14.263 "block_size": 512, 00:09:14.263 "num_blocks": 65536, 00:09:14.263 "uuid": "a64f0fe0-ad20-46ad-906a-33639d6667a3", 00:09:14.263 "assigned_rate_limits": { 00:09:14.263 
"rw_ios_per_sec": 0, 00:09:14.263 "rw_mbytes_per_sec": 0, 00:09:14.263 "r_mbytes_per_sec": 0, 00:09:14.263 "w_mbytes_per_sec": 0 00:09:14.263 }, 00:09:14.263 "claimed": true, 00:09:14.263 "claim_type": "exclusive_write", 00:09:14.263 "zoned": false, 00:09:14.263 "supported_io_types": { 00:09:14.263 "read": true, 00:09:14.263 "write": true, 00:09:14.263 "unmap": true, 00:09:14.263 "flush": true, 00:09:14.263 "reset": true, 00:09:14.263 "nvme_admin": false, 00:09:14.263 "nvme_io": false, 00:09:14.263 "nvme_io_md": false, 00:09:14.263 "write_zeroes": true, 00:09:14.263 "zcopy": true, 00:09:14.263 "get_zone_info": false, 00:09:14.263 "zone_management": false, 00:09:14.263 "zone_append": false, 00:09:14.263 "compare": false, 00:09:14.263 "compare_and_write": false, 00:09:14.263 "abort": true, 00:09:14.264 "seek_hole": false, 00:09:14.264 "seek_data": false, 00:09:14.264 "copy": true, 00:09:14.264 "nvme_iov_md": false 00:09:14.264 }, 00:09:14.264 "memory_domains": [ 00:09:14.264 { 00:09:14.264 "dma_device_id": "system", 00:09:14.264 "dma_device_type": 1 00:09:14.264 }, 00:09:14.264 { 00:09:14.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.264 "dma_device_type": 2 00:09:14.264 } 00:09:14.264 ], 00:09:14.264 "driver_specific": {} 00:09:14.264 } 00:09:14.264 ] 00:09:14.264 20:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.264 20:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:14.264 20:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:14.264 20:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.264 20:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.264 20:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:09:14.264 20:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.264 20:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.264 20:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.264 20:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.264 20:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.264 20:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.264 20:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.264 20:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.264 20:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.264 20:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.264 20:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.264 20:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.264 "name": "Existed_Raid", 00:09:14.264 "uuid": "4497d652-523d-40e0-aed1-99cc49b22cae", 00:09:14.264 "strip_size_kb": 64, 00:09:14.264 "state": "configuring", 00:09:14.264 "raid_level": "raid0", 00:09:14.264 "superblock": true, 00:09:14.264 "num_base_bdevs": 3, 00:09:14.264 "num_base_bdevs_discovered": 1, 00:09:14.264 "num_base_bdevs_operational": 3, 00:09:14.264 "base_bdevs_list": [ 00:09:14.264 { 00:09:14.264 "name": "BaseBdev1", 00:09:14.264 "uuid": "a64f0fe0-ad20-46ad-906a-33639d6667a3", 00:09:14.264 "is_configured": true, 00:09:14.264 "data_offset": 2048, 00:09:14.264 "data_size": 63488 
00:09:14.264 }, 00:09:14.264 { 00:09:14.264 "name": "BaseBdev2", 00:09:14.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.264 "is_configured": false, 00:09:14.264 "data_offset": 0, 00:09:14.264 "data_size": 0 00:09:14.264 }, 00:09:14.264 { 00:09:14.264 "name": "BaseBdev3", 00:09:14.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.264 "is_configured": false, 00:09:14.264 "data_offset": 0, 00:09:14.264 "data_size": 0 00:09:14.264 } 00:09:14.264 ] 00:09:14.264 }' 00:09:14.264 20:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.264 20:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.829 20:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:14.829 20:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.829 20:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.829 [2024-11-26 20:22:08.080423] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:14.829 [2024-11-26 20:22:08.080546] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:09:14.829 20:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.829 20:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:14.829 20:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.829 20:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.829 [2024-11-26 20:22:08.092452] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:14.830 [2024-11-26 
20:22:08.094584] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:14.830 [2024-11-26 20:22:08.094684] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:14.830 [2024-11-26 20:22:08.094734] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:14.830 [2024-11-26 20:22:08.094763] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:14.830 20:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.830 20:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:14.830 20:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:14.830 20:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:14.830 20:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.830 20:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.830 20:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:14.830 20:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.830 20:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.830 20:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.830 20:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.830 20:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.830 20:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:09:14.830 20:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.830 20:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.830 20:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.830 20:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.830 20:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.830 20:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.830 "name": "Existed_Raid", 00:09:14.830 "uuid": "f51e9fce-ef32-4b51-9845-4aaeb5a3896a", 00:09:14.830 "strip_size_kb": 64, 00:09:14.830 "state": "configuring", 00:09:14.830 "raid_level": "raid0", 00:09:14.830 "superblock": true, 00:09:14.830 "num_base_bdevs": 3, 00:09:14.830 "num_base_bdevs_discovered": 1, 00:09:14.830 "num_base_bdevs_operational": 3, 00:09:14.830 "base_bdevs_list": [ 00:09:14.830 { 00:09:14.830 "name": "BaseBdev1", 00:09:14.830 "uuid": "a64f0fe0-ad20-46ad-906a-33639d6667a3", 00:09:14.830 "is_configured": true, 00:09:14.830 "data_offset": 2048, 00:09:14.830 "data_size": 63488 00:09:14.830 }, 00:09:14.830 { 00:09:14.830 "name": "BaseBdev2", 00:09:14.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.830 "is_configured": false, 00:09:14.830 "data_offset": 0, 00:09:14.830 "data_size": 0 00:09:14.830 }, 00:09:14.830 { 00:09:14.830 "name": "BaseBdev3", 00:09:14.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.830 "is_configured": false, 00:09:14.830 "data_offset": 0, 00:09:14.830 "data_size": 0 00:09:14.830 } 00:09:14.830 ] 00:09:14.830 }' 00:09:14.830 20:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.830 20:22:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:15.088 20:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:15.089 20:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.089 20:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.089 [2024-11-26 20:22:08.603123] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:15.089 BaseBdev2 00:09:15.089 20:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.089 20:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:15.089 20:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:15.089 20:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:15.089 20:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:15.089 20:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:15.089 20:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:15.089 20:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:15.089 20:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.089 20:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.089 20:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.089 20:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:15.089 20:22:08 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.089 20:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.089 [ 00:09:15.089 { 00:09:15.089 "name": "BaseBdev2", 00:09:15.089 "aliases": [ 00:09:15.089 "dae0682b-777b-4096-86d2-94bf6a7c8966" 00:09:15.089 ], 00:09:15.089 "product_name": "Malloc disk", 00:09:15.089 "block_size": 512, 00:09:15.089 "num_blocks": 65536, 00:09:15.089 "uuid": "dae0682b-777b-4096-86d2-94bf6a7c8966", 00:09:15.089 "assigned_rate_limits": { 00:09:15.089 "rw_ios_per_sec": 0, 00:09:15.089 "rw_mbytes_per_sec": 0, 00:09:15.089 "r_mbytes_per_sec": 0, 00:09:15.089 "w_mbytes_per_sec": 0 00:09:15.089 }, 00:09:15.089 "claimed": true, 00:09:15.089 "claim_type": "exclusive_write", 00:09:15.089 "zoned": false, 00:09:15.089 "supported_io_types": { 00:09:15.089 "read": true, 00:09:15.089 "write": true, 00:09:15.089 "unmap": true, 00:09:15.089 "flush": true, 00:09:15.089 "reset": true, 00:09:15.089 "nvme_admin": false, 00:09:15.089 "nvme_io": false, 00:09:15.089 "nvme_io_md": false, 00:09:15.089 "write_zeroes": true, 00:09:15.089 "zcopy": true, 00:09:15.089 "get_zone_info": false, 00:09:15.089 "zone_management": false, 00:09:15.089 "zone_append": false, 00:09:15.089 "compare": false, 00:09:15.089 "compare_and_write": false, 00:09:15.089 "abort": true, 00:09:15.089 "seek_hole": false, 00:09:15.089 "seek_data": false, 00:09:15.089 "copy": true, 00:09:15.089 "nvme_iov_md": false 00:09:15.089 }, 00:09:15.089 "memory_domains": [ 00:09:15.347 { 00:09:15.347 "dma_device_id": "system", 00:09:15.347 "dma_device_type": 1 00:09:15.347 }, 00:09:15.347 { 00:09:15.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.347 "dma_device_type": 2 00:09:15.347 } 00:09:15.347 ], 00:09:15.347 "driver_specific": {} 00:09:15.347 } 00:09:15.347 ] 00:09:15.347 20:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.347 20:22:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@907 -- # return 0 00:09:15.347 20:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:15.347 20:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:15.347 20:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:15.347 20:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.347 20:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.347 20:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:15.347 20:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.347 20:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.347 20:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.347 20:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.347 20:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.347 20:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.347 20:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.347 20:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.347 20:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.347 20:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.347 20:22:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.347 20:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.347 "name": "Existed_Raid", 00:09:15.347 "uuid": "f51e9fce-ef32-4b51-9845-4aaeb5a3896a", 00:09:15.347 "strip_size_kb": 64, 00:09:15.347 "state": "configuring", 00:09:15.347 "raid_level": "raid0", 00:09:15.347 "superblock": true, 00:09:15.347 "num_base_bdevs": 3, 00:09:15.347 "num_base_bdevs_discovered": 2, 00:09:15.347 "num_base_bdevs_operational": 3, 00:09:15.347 "base_bdevs_list": [ 00:09:15.347 { 00:09:15.347 "name": "BaseBdev1", 00:09:15.347 "uuid": "a64f0fe0-ad20-46ad-906a-33639d6667a3", 00:09:15.347 "is_configured": true, 00:09:15.347 "data_offset": 2048, 00:09:15.347 "data_size": 63488 00:09:15.347 }, 00:09:15.347 { 00:09:15.347 "name": "BaseBdev2", 00:09:15.347 "uuid": "dae0682b-777b-4096-86d2-94bf6a7c8966", 00:09:15.347 "is_configured": true, 00:09:15.347 "data_offset": 2048, 00:09:15.347 "data_size": 63488 00:09:15.347 }, 00:09:15.347 { 00:09:15.347 "name": "BaseBdev3", 00:09:15.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.347 "is_configured": false, 00:09:15.347 "data_offset": 0, 00:09:15.347 "data_size": 0 00:09:15.347 } 00:09:15.347 ] 00:09:15.347 }' 00:09:15.347 20:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.347 20:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.605 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:15.605 20:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.605 20:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.605 [2024-11-26 20:22:09.088238] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:15.605 [2024-11-26 20:22:09.088542] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:15.605 [2024-11-26 20:22:09.088607] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:15.605 [2024-11-26 20:22:09.088984] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:15.605 BaseBdev3 00:09:15.605 [2024-11-26 20:22:09.089192] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:15.605 [2024-11-26 20:22:09.089206] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:09:15.605 [2024-11-26 20:22:09.089335] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:15.605 20:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.605 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:15.605 20:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:15.605 20:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:15.605 20:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:15.605 20:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:15.605 20:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:15.605 20:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:15.605 20:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.605 20:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.605 20:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:09:15.605 20:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:15.605 20:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.605 20:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.605 [ 00:09:15.605 { 00:09:15.605 "name": "BaseBdev3", 00:09:15.605 "aliases": [ 00:09:15.605 "4c7b8cf6-4f9b-4996-bac8-41fdc74f0a73" 00:09:15.605 ], 00:09:15.605 "product_name": "Malloc disk", 00:09:15.605 "block_size": 512, 00:09:15.605 "num_blocks": 65536, 00:09:15.605 "uuid": "4c7b8cf6-4f9b-4996-bac8-41fdc74f0a73", 00:09:15.605 "assigned_rate_limits": { 00:09:15.605 "rw_ios_per_sec": 0, 00:09:15.605 "rw_mbytes_per_sec": 0, 00:09:15.605 "r_mbytes_per_sec": 0, 00:09:15.605 "w_mbytes_per_sec": 0 00:09:15.605 }, 00:09:15.605 "claimed": true, 00:09:15.605 "claim_type": "exclusive_write", 00:09:15.605 "zoned": false, 00:09:15.605 "supported_io_types": { 00:09:15.605 "read": true, 00:09:15.605 "write": true, 00:09:15.605 "unmap": true, 00:09:15.605 "flush": true, 00:09:15.605 "reset": true, 00:09:15.605 "nvme_admin": false, 00:09:15.605 "nvme_io": false, 00:09:15.605 "nvme_io_md": false, 00:09:15.605 "write_zeroes": true, 00:09:15.605 "zcopy": true, 00:09:15.605 "get_zone_info": false, 00:09:15.605 "zone_management": false, 00:09:15.605 "zone_append": false, 00:09:15.605 "compare": false, 00:09:15.605 "compare_and_write": false, 00:09:15.605 "abort": true, 00:09:15.605 "seek_hole": false, 00:09:15.605 "seek_data": false, 00:09:15.605 "copy": true, 00:09:15.605 "nvme_iov_md": false 00:09:15.605 }, 00:09:15.605 "memory_domains": [ 00:09:15.605 { 00:09:15.605 "dma_device_id": "system", 00:09:15.605 "dma_device_type": 1 00:09:15.605 }, 00:09:15.605 { 00:09:15.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.605 "dma_device_type": 2 00:09:15.605 } 00:09:15.605 ], 00:09:15.605 "driver_specific": 
{} 00:09:15.605 } 00:09:15.605 ] 00:09:15.605 20:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.605 20:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:15.605 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:15.605 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:15.605 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:15.605 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.605 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:15.605 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:15.605 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.605 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.605 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.605 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.605 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.605 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.605 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.605 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.605 20:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:15.605 20:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.861 20:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.862 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.862 "name": "Existed_Raid", 00:09:15.862 "uuid": "f51e9fce-ef32-4b51-9845-4aaeb5a3896a", 00:09:15.862 "strip_size_kb": 64, 00:09:15.862 "state": "online", 00:09:15.862 "raid_level": "raid0", 00:09:15.862 "superblock": true, 00:09:15.862 "num_base_bdevs": 3, 00:09:15.862 "num_base_bdevs_discovered": 3, 00:09:15.862 "num_base_bdevs_operational": 3, 00:09:15.862 "base_bdevs_list": [ 00:09:15.862 { 00:09:15.862 "name": "BaseBdev1", 00:09:15.862 "uuid": "a64f0fe0-ad20-46ad-906a-33639d6667a3", 00:09:15.862 "is_configured": true, 00:09:15.862 "data_offset": 2048, 00:09:15.862 "data_size": 63488 00:09:15.862 }, 00:09:15.862 { 00:09:15.862 "name": "BaseBdev2", 00:09:15.862 "uuid": "dae0682b-777b-4096-86d2-94bf6a7c8966", 00:09:15.862 "is_configured": true, 00:09:15.862 "data_offset": 2048, 00:09:15.862 "data_size": 63488 00:09:15.862 }, 00:09:15.862 { 00:09:15.862 "name": "BaseBdev3", 00:09:15.862 "uuid": "4c7b8cf6-4f9b-4996-bac8-41fdc74f0a73", 00:09:15.862 "is_configured": true, 00:09:15.862 "data_offset": 2048, 00:09:15.862 "data_size": 63488 00:09:15.862 } 00:09:15.862 ] 00:09:15.862 }' 00:09:15.862 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.862 20:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.118 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:16.118 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:16.118 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:09:16.118 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:16.118 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:16.118 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:16.118 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:16.118 20:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.118 20:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.118 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:16.118 [2024-11-26 20:22:09.551928] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:16.118 20:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.118 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:16.118 "name": "Existed_Raid", 00:09:16.118 "aliases": [ 00:09:16.118 "f51e9fce-ef32-4b51-9845-4aaeb5a3896a" 00:09:16.118 ], 00:09:16.118 "product_name": "Raid Volume", 00:09:16.118 "block_size": 512, 00:09:16.118 "num_blocks": 190464, 00:09:16.118 "uuid": "f51e9fce-ef32-4b51-9845-4aaeb5a3896a", 00:09:16.118 "assigned_rate_limits": { 00:09:16.118 "rw_ios_per_sec": 0, 00:09:16.118 "rw_mbytes_per_sec": 0, 00:09:16.118 "r_mbytes_per_sec": 0, 00:09:16.118 "w_mbytes_per_sec": 0 00:09:16.118 }, 00:09:16.118 "claimed": false, 00:09:16.118 "zoned": false, 00:09:16.118 "supported_io_types": { 00:09:16.118 "read": true, 00:09:16.118 "write": true, 00:09:16.118 "unmap": true, 00:09:16.118 "flush": true, 00:09:16.118 "reset": true, 00:09:16.118 "nvme_admin": false, 00:09:16.118 "nvme_io": false, 00:09:16.118 "nvme_io_md": false, 00:09:16.118 
"write_zeroes": true, 00:09:16.118 "zcopy": false, 00:09:16.118 "get_zone_info": false, 00:09:16.118 "zone_management": false, 00:09:16.118 "zone_append": false, 00:09:16.118 "compare": false, 00:09:16.118 "compare_and_write": false, 00:09:16.118 "abort": false, 00:09:16.118 "seek_hole": false, 00:09:16.118 "seek_data": false, 00:09:16.118 "copy": false, 00:09:16.118 "nvme_iov_md": false 00:09:16.118 }, 00:09:16.118 "memory_domains": [ 00:09:16.118 { 00:09:16.118 "dma_device_id": "system", 00:09:16.118 "dma_device_type": 1 00:09:16.118 }, 00:09:16.118 { 00:09:16.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.118 "dma_device_type": 2 00:09:16.118 }, 00:09:16.118 { 00:09:16.118 "dma_device_id": "system", 00:09:16.118 "dma_device_type": 1 00:09:16.118 }, 00:09:16.118 { 00:09:16.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.118 "dma_device_type": 2 00:09:16.118 }, 00:09:16.118 { 00:09:16.118 "dma_device_id": "system", 00:09:16.118 "dma_device_type": 1 00:09:16.118 }, 00:09:16.118 { 00:09:16.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.119 "dma_device_type": 2 00:09:16.119 } 00:09:16.119 ], 00:09:16.119 "driver_specific": { 00:09:16.119 "raid": { 00:09:16.119 "uuid": "f51e9fce-ef32-4b51-9845-4aaeb5a3896a", 00:09:16.119 "strip_size_kb": 64, 00:09:16.119 "state": "online", 00:09:16.119 "raid_level": "raid0", 00:09:16.119 "superblock": true, 00:09:16.119 "num_base_bdevs": 3, 00:09:16.119 "num_base_bdevs_discovered": 3, 00:09:16.119 "num_base_bdevs_operational": 3, 00:09:16.119 "base_bdevs_list": [ 00:09:16.119 { 00:09:16.119 "name": "BaseBdev1", 00:09:16.119 "uuid": "a64f0fe0-ad20-46ad-906a-33639d6667a3", 00:09:16.119 "is_configured": true, 00:09:16.119 "data_offset": 2048, 00:09:16.119 "data_size": 63488 00:09:16.119 }, 00:09:16.119 { 00:09:16.119 "name": "BaseBdev2", 00:09:16.119 "uuid": "dae0682b-777b-4096-86d2-94bf6a7c8966", 00:09:16.119 "is_configured": true, 00:09:16.119 "data_offset": 2048, 00:09:16.119 "data_size": 63488 00:09:16.119 }, 
00:09:16.119 { 00:09:16.119 "name": "BaseBdev3", 00:09:16.119 "uuid": "4c7b8cf6-4f9b-4996-bac8-41fdc74f0a73", 00:09:16.119 "is_configured": true, 00:09:16.119 "data_offset": 2048, 00:09:16.119 "data_size": 63488 00:09:16.119 } 00:09:16.119 ] 00:09:16.119 } 00:09:16.119 } 00:09:16.119 }' 00:09:16.119 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:16.119 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:16.119 BaseBdev2 00:09:16.119 BaseBdev3' 00:09:16.119 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.376 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:16.376 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.376 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:16.376 20:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.376 20:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.376 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.376 20:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.376 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.376 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.376 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.376 
20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:16.376 20:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.376 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.376 20:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.376 20:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.376 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.376 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.376 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.376 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:16.376 20:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.376 20:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.376 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.376 20:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.376 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.376 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.376 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:16.376 20:22:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.376 20:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.376 [2024-11-26 20:22:09.839189] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:16.376 [2024-11-26 20:22:09.839292] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:16.376 [2024-11-26 20:22:09.839389] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:16.376 20:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.376 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:16.376 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:16.376 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:16.376 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:16.376 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:16.377 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:09:16.377 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.377 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:16.377 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:16.377 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:16.377 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:16.377 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:16.377 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.377 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.377 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.377 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.377 20:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.377 20:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.377 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.377 20:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.377 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.377 "name": "Existed_Raid", 00:09:16.377 "uuid": "f51e9fce-ef32-4b51-9845-4aaeb5a3896a", 00:09:16.377 "strip_size_kb": 64, 00:09:16.377 "state": "offline", 00:09:16.377 "raid_level": "raid0", 00:09:16.377 "superblock": true, 00:09:16.377 "num_base_bdevs": 3, 00:09:16.377 "num_base_bdevs_discovered": 2, 00:09:16.377 "num_base_bdevs_operational": 2, 00:09:16.377 "base_bdevs_list": [ 00:09:16.377 { 00:09:16.377 "name": null, 00:09:16.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.377 "is_configured": false, 00:09:16.377 "data_offset": 0, 00:09:16.377 "data_size": 63488 00:09:16.377 }, 00:09:16.377 { 00:09:16.377 "name": "BaseBdev2", 00:09:16.377 "uuid": "dae0682b-777b-4096-86d2-94bf6a7c8966", 00:09:16.377 "is_configured": true, 00:09:16.377 "data_offset": 2048, 00:09:16.377 "data_size": 63488 00:09:16.377 }, 00:09:16.377 { 00:09:16.377 "name": "BaseBdev3", 00:09:16.377 "uuid": "4c7b8cf6-4f9b-4996-bac8-41fdc74f0a73", 
00:09:16.377 "is_configured": true, 00:09:16.377 "data_offset": 2048, 00:09:16.377 "data_size": 63488 00:09:16.377 } 00:09:16.377 ] 00:09:16.377 }' 00:09:16.377 20:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.377 20:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.942 20:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:16.942 20:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:16.942 20:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.942 20:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:16.942 20:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.942 20:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.942 20:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.942 20:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:16.942 20:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:16.942 20:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:16.942 20:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.942 20:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.942 [2024-11-26 20:22:10.382388] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:16.942 20:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.942 20:22:10 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:16.942 20:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:16.942 20:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.942 20:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.942 20:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.942 20:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:16.942 20:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.942 20:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:16.942 20:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:16.942 20:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:16.942 20:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.942 20:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.943 [2024-11-26 20:22:10.438490] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:16.943 [2024-11-26 20:22:10.438554] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:09:16.943 20:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.943 20:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:16.943 20:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:16.943 20:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:16.943 20:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:16.943 20:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.943 20:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.943 20:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.201 20:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:17.201 20:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:17.201 20:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:17.201 20:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:17.201 20:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:17.201 20:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:17.201 20:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.201 20:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.201 BaseBdev2 00:09:17.201 20:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.201 20:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:17.201 20:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:17.201 20:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:17.201 20:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:17.201 20:22:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:17.201 20:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:17.201 20:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:17.201 20:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.201 20:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.201 20:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.201 20:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:17.201 20:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.201 20:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.201 [ 00:09:17.201 { 00:09:17.201 "name": "BaseBdev2", 00:09:17.201 "aliases": [ 00:09:17.201 "88ad5d54-6e6f-4e87-b4c1-222791094403" 00:09:17.201 ], 00:09:17.201 "product_name": "Malloc disk", 00:09:17.201 "block_size": 512, 00:09:17.201 "num_blocks": 65536, 00:09:17.201 "uuid": "88ad5d54-6e6f-4e87-b4c1-222791094403", 00:09:17.201 "assigned_rate_limits": { 00:09:17.201 "rw_ios_per_sec": 0, 00:09:17.201 "rw_mbytes_per_sec": 0, 00:09:17.201 "r_mbytes_per_sec": 0, 00:09:17.201 "w_mbytes_per_sec": 0 00:09:17.201 }, 00:09:17.201 "claimed": false, 00:09:17.201 "zoned": false, 00:09:17.201 "supported_io_types": { 00:09:17.201 "read": true, 00:09:17.201 "write": true, 00:09:17.201 "unmap": true, 00:09:17.201 "flush": true, 00:09:17.201 "reset": true, 00:09:17.201 "nvme_admin": false, 00:09:17.201 "nvme_io": false, 00:09:17.201 "nvme_io_md": false, 00:09:17.201 "write_zeroes": true, 00:09:17.201 "zcopy": true, 00:09:17.201 "get_zone_info": false, 00:09:17.201 
"zone_management": false, 00:09:17.201 "zone_append": false, 00:09:17.201 "compare": false, 00:09:17.201 "compare_and_write": false, 00:09:17.201 "abort": true, 00:09:17.201 "seek_hole": false, 00:09:17.201 "seek_data": false, 00:09:17.201 "copy": true, 00:09:17.201 "nvme_iov_md": false 00:09:17.201 }, 00:09:17.201 "memory_domains": [ 00:09:17.201 { 00:09:17.201 "dma_device_id": "system", 00:09:17.201 "dma_device_type": 1 00:09:17.201 }, 00:09:17.201 { 00:09:17.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.201 "dma_device_type": 2 00:09:17.201 } 00:09:17.201 ], 00:09:17.201 "driver_specific": {} 00:09:17.201 } 00:09:17.201 ] 00:09:17.201 20:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.201 20:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:17.201 20:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:17.201 20:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:17.201 20:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:17.201 20:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.201 20:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.201 BaseBdev3 00:09:17.201 20:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.201 20:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:17.201 20:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:17.201 20:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:17.201 20:22:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local i 00:09:17.201 20:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:17.201 20:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:17.201 20:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:17.201 20:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.201 20:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.201 20:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.201 20:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:17.201 20:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.202 20:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.202 [ 00:09:17.202 { 00:09:17.202 "name": "BaseBdev3", 00:09:17.202 "aliases": [ 00:09:17.202 "75925145-16f7-4969-bef8-ec41452394ca" 00:09:17.202 ], 00:09:17.202 "product_name": "Malloc disk", 00:09:17.202 "block_size": 512, 00:09:17.202 "num_blocks": 65536, 00:09:17.202 "uuid": "75925145-16f7-4969-bef8-ec41452394ca", 00:09:17.202 "assigned_rate_limits": { 00:09:17.202 "rw_ios_per_sec": 0, 00:09:17.202 "rw_mbytes_per_sec": 0, 00:09:17.202 "r_mbytes_per_sec": 0, 00:09:17.202 "w_mbytes_per_sec": 0 00:09:17.202 }, 00:09:17.202 "claimed": false, 00:09:17.202 "zoned": false, 00:09:17.202 "supported_io_types": { 00:09:17.202 "read": true, 00:09:17.202 "write": true, 00:09:17.202 "unmap": true, 00:09:17.202 "flush": true, 00:09:17.202 "reset": true, 00:09:17.202 "nvme_admin": false, 00:09:17.202 "nvme_io": false, 00:09:17.202 "nvme_io_md": false, 00:09:17.202 "write_zeroes": true, 00:09:17.202 
"zcopy": true, 00:09:17.202 "get_zone_info": false, 00:09:17.202 "zone_management": false, 00:09:17.202 "zone_append": false, 00:09:17.202 "compare": false, 00:09:17.202 "compare_and_write": false, 00:09:17.202 "abort": true, 00:09:17.202 "seek_hole": false, 00:09:17.202 "seek_data": false, 00:09:17.202 "copy": true, 00:09:17.202 "nvme_iov_md": false 00:09:17.202 }, 00:09:17.202 "memory_domains": [ 00:09:17.202 { 00:09:17.202 "dma_device_id": "system", 00:09:17.202 "dma_device_type": 1 00:09:17.202 }, 00:09:17.202 { 00:09:17.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.202 "dma_device_type": 2 00:09:17.202 } 00:09:17.202 ], 00:09:17.202 "driver_specific": {} 00:09:17.202 } 00:09:17.202 ] 00:09:17.202 20:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.202 20:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:17.202 20:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:17.202 20:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:17.202 20:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:17.202 20:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.202 20:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.202 [2024-11-26 20:22:10.645507] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:17.202 [2024-11-26 20:22:10.645679] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:17.202 [2024-11-26 20:22:10.645744] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:17.202 [2024-11-26 20:22:10.648153] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:17.202 20:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.202 20:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:17.202 20:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.202 20:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:17.202 20:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:17.202 20:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.202 20:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.202 20:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.202 20:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.202 20:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.202 20:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.202 20:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.202 20:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.202 20:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.202 20:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.202 20:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.202 20:22:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.202 "name": "Existed_Raid", 00:09:17.202 "uuid": "ef6d229f-2e78-4662-a488-f48573b49167", 00:09:17.202 "strip_size_kb": 64, 00:09:17.202 "state": "configuring", 00:09:17.202 "raid_level": "raid0", 00:09:17.202 "superblock": true, 00:09:17.202 "num_base_bdevs": 3, 00:09:17.202 "num_base_bdevs_discovered": 2, 00:09:17.202 "num_base_bdevs_operational": 3, 00:09:17.202 "base_bdevs_list": [ 00:09:17.202 { 00:09:17.202 "name": "BaseBdev1", 00:09:17.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.202 "is_configured": false, 00:09:17.202 "data_offset": 0, 00:09:17.202 "data_size": 0 00:09:17.202 }, 00:09:17.202 { 00:09:17.202 "name": "BaseBdev2", 00:09:17.202 "uuid": "88ad5d54-6e6f-4e87-b4c1-222791094403", 00:09:17.202 "is_configured": true, 00:09:17.202 "data_offset": 2048, 00:09:17.202 "data_size": 63488 00:09:17.202 }, 00:09:17.202 { 00:09:17.202 "name": "BaseBdev3", 00:09:17.202 "uuid": "75925145-16f7-4969-bef8-ec41452394ca", 00:09:17.202 "is_configured": true, 00:09:17.202 "data_offset": 2048, 00:09:17.202 "data_size": 63488 00:09:17.202 } 00:09:17.202 ] 00:09:17.202 }' 00:09:17.202 20:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.202 20:22:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.831 20:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:17.831 20:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.831 20:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.831 [2024-11-26 20:22:11.092741] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:17.831 20:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.831 20:22:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:17.831 20:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.831 20:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:17.831 20:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:17.831 20:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.831 20:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.831 20:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.831 20:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.831 20:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.831 20:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.831 20:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.831 20:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.831 20:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.831 20:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.831 20:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.831 20:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.831 "name": "Existed_Raid", 00:09:17.831 "uuid": "ef6d229f-2e78-4662-a488-f48573b49167", 00:09:17.831 "strip_size_kb": 64, 
00:09:17.831 "state": "configuring", 00:09:17.831 "raid_level": "raid0", 00:09:17.831 "superblock": true, 00:09:17.831 "num_base_bdevs": 3, 00:09:17.831 "num_base_bdevs_discovered": 1, 00:09:17.831 "num_base_bdevs_operational": 3, 00:09:17.831 "base_bdevs_list": [ 00:09:17.831 { 00:09:17.831 "name": "BaseBdev1", 00:09:17.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.831 "is_configured": false, 00:09:17.831 "data_offset": 0, 00:09:17.831 "data_size": 0 00:09:17.831 }, 00:09:17.831 { 00:09:17.831 "name": null, 00:09:17.831 "uuid": "88ad5d54-6e6f-4e87-b4c1-222791094403", 00:09:17.831 "is_configured": false, 00:09:17.831 "data_offset": 0, 00:09:17.831 "data_size": 63488 00:09:17.831 }, 00:09:17.831 { 00:09:17.831 "name": "BaseBdev3", 00:09:17.831 "uuid": "75925145-16f7-4969-bef8-ec41452394ca", 00:09:17.831 "is_configured": true, 00:09:17.831 "data_offset": 2048, 00:09:17.831 "data_size": 63488 00:09:17.831 } 00:09:17.831 ] 00:09:17.831 }' 00:09:17.831 20:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.831 20:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.090 20:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.090 20:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.090 20:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.090 20:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:18.090 20:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.349 20:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:18.349 20:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:09:18.349 20:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.349 20:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.349 [2024-11-26 20:22:11.667671] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:18.349 BaseBdev1 00:09:18.349 20:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.349 20:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:18.349 20:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:18.349 20:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:18.349 20:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:18.349 20:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:18.349 20:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:18.349 20:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:18.349 20:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.349 20:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.349 20:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.349 20:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:18.349 20:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.349 20:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.349 
[ 00:09:18.349 { 00:09:18.349 "name": "BaseBdev1", 00:09:18.349 "aliases": [ 00:09:18.349 "1e5d1bc6-15da-4fb1-8f63-71f13607b548" 00:09:18.349 ], 00:09:18.349 "product_name": "Malloc disk", 00:09:18.349 "block_size": 512, 00:09:18.349 "num_blocks": 65536, 00:09:18.349 "uuid": "1e5d1bc6-15da-4fb1-8f63-71f13607b548", 00:09:18.349 "assigned_rate_limits": { 00:09:18.349 "rw_ios_per_sec": 0, 00:09:18.349 "rw_mbytes_per_sec": 0, 00:09:18.349 "r_mbytes_per_sec": 0, 00:09:18.349 "w_mbytes_per_sec": 0 00:09:18.349 }, 00:09:18.349 "claimed": true, 00:09:18.349 "claim_type": "exclusive_write", 00:09:18.349 "zoned": false, 00:09:18.349 "supported_io_types": { 00:09:18.349 "read": true, 00:09:18.349 "write": true, 00:09:18.349 "unmap": true, 00:09:18.349 "flush": true, 00:09:18.349 "reset": true, 00:09:18.349 "nvme_admin": false, 00:09:18.349 "nvme_io": false, 00:09:18.349 "nvme_io_md": false, 00:09:18.349 "write_zeroes": true, 00:09:18.349 "zcopy": true, 00:09:18.349 "get_zone_info": false, 00:09:18.349 "zone_management": false, 00:09:18.349 "zone_append": false, 00:09:18.349 "compare": false, 00:09:18.349 "compare_and_write": false, 00:09:18.349 "abort": true, 00:09:18.349 "seek_hole": false, 00:09:18.349 "seek_data": false, 00:09:18.349 "copy": true, 00:09:18.349 "nvme_iov_md": false 00:09:18.349 }, 00:09:18.349 "memory_domains": [ 00:09:18.349 { 00:09:18.349 "dma_device_id": "system", 00:09:18.349 "dma_device_type": 1 00:09:18.349 }, 00:09:18.349 { 00:09:18.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.349 "dma_device_type": 2 00:09:18.349 } 00:09:18.349 ], 00:09:18.349 "driver_specific": {} 00:09:18.349 } 00:09:18.349 ] 00:09:18.349 20:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.349 20:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:18.349 20:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:09:18.349 20:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.350 20:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.350 20:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:18.350 20:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.350 20:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.350 20:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.350 20:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.350 20:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.350 20:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.350 20:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.350 20:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.350 20:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.350 20:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.350 20:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.350 20:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.350 "name": "Existed_Raid", 00:09:18.350 "uuid": "ef6d229f-2e78-4662-a488-f48573b49167", 00:09:18.350 "strip_size_kb": 64, 00:09:18.350 "state": "configuring", 00:09:18.350 "raid_level": "raid0", 00:09:18.350 "superblock": true, 
00:09:18.350 "num_base_bdevs": 3, 00:09:18.350 "num_base_bdevs_discovered": 2, 00:09:18.350 "num_base_bdevs_operational": 3, 00:09:18.350 "base_bdevs_list": [ 00:09:18.350 { 00:09:18.350 "name": "BaseBdev1", 00:09:18.350 "uuid": "1e5d1bc6-15da-4fb1-8f63-71f13607b548", 00:09:18.350 "is_configured": true, 00:09:18.350 "data_offset": 2048, 00:09:18.350 "data_size": 63488 00:09:18.350 }, 00:09:18.350 { 00:09:18.350 "name": null, 00:09:18.350 "uuid": "88ad5d54-6e6f-4e87-b4c1-222791094403", 00:09:18.350 "is_configured": false, 00:09:18.350 "data_offset": 0, 00:09:18.350 "data_size": 63488 00:09:18.350 }, 00:09:18.350 { 00:09:18.350 "name": "BaseBdev3", 00:09:18.350 "uuid": "75925145-16f7-4969-bef8-ec41452394ca", 00:09:18.350 "is_configured": true, 00:09:18.350 "data_offset": 2048, 00:09:18.350 "data_size": 63488 00:09:18.350 } 00:09:18.350 ] 00:09:18.350 }' 00:09:18.350 20:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.350 20:22:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.915 20:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:18.915 20:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.915 20:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.915 20:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.915 20:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.915 20:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:18.915 20:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:18.915 20:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:09:18.915 20:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.915 [2024-11-26 20:22:12.226774] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:18.915 20:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.915 20:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:18.915 20:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.915 20:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.915 20:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:18.915 20:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.916 20:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.916 20:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.916 20:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.916 20:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.916 20:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.916 20:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.916 20:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.916 20:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.916 20:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:18.916 20:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.916 20:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.916 "name": "Existed_Raid", 00:09:18.916 "uuid": "ef6d229f-2e78-4662-a488-f48573b49167", 00:09:18.916 "strip_size_kb": 64, 00:09:18.916 "state": "configuring", 00:09:18.916 "raid_level": "raid0", 00:09:18.916 "superblock": true, 00:09:18.916 "num_base_bdevs": 3, 00:09:18.916 "num_base_bdevs_discovered": 1, 00:09:18.916 "num_base_bdevs_operational": 3, 00:09:18.916 "base_bdevs_list": [ 00:09:18.916 { 00:09:18.916 "name": "BaseBdev1", 00:09:18.916 "uuid": "1e5d1bc6-15da-4fb1-8f63-71f13607b548", 00:09:18.916 "is_configured": true, 00:09:18.916 "data_offset": 2048, 00:09:18.916 "data_size": 63488 00:09:18.916 }, 00:09:18.916 { 00:09:18.916 "name": null, 00:09:18.916 "uuid": "88ad5d54-6e6f-4e87-b4c1-222791094403", 00:09:18.916 "is_configured": false, 00:09:18.916 "data_offset": 0, 00:09:18.916 "data_size": 63488 00:09:18.916 }, 00:09:18.916 { 00:09:18.916 "name": null, 00:09:18.916 "uuid": "75925145-16f7-4969-bef8-ec41452394ca", 00:09:18.916 "is_configured": false, 00:09:18.916 "data_offset": 0, 00:09:18.916 "data_size": 63488 00:09:18.916 } 00:09:18.916 ] 00:09:18.916 }' 00:09:18.916 20:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.916 20:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.174 20:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.174 20:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:19.174 20:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.174 20:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:09:19.432 20:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.432 20:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:19.432 20:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:19.432 20:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.432 20:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.432 [2024-11-26 20:22:12.753939] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:19.432 20:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.432 20:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:19.432 20:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.432 20:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.432 20:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:19.432 20:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.432 20:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.432 20:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.432 20:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.432 20:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.432 20:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:09:19.432 20:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.432 20:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.432 20:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.432 20:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.432 20:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.432 20:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.432 "name": "Existed_Raid", 00:09:19.432 "uuid": "ef6d229f-2e78-4662-a488-f48573b49167", 00:09:19.432 "strip_size_kb": 64, 00:09:19.432 "state": "configuring", 00:09:19.432 "raid_level": "raid0", 00:09:19.432 "superblock": true, 00:09:19.432 "num_base_bdevs": 3, 00:09:19.432 "num_base_bdevs_discovered": 2, 00:09:19.432 "num_base_bdevs_operational": 3, 00:09:19.432 "base_bdevs_list": [ 00:09:19.432 { 00:09:19.432 "name": "BaseBdev1", 00:09:19.432 "uuid": "1e5d1bc6-15da-4fb1-8f63-71f13607b548", 00:09:19.432 "is_configured": true, 00:09:19.432 "data_offset": 2048, 00:09:19.432 "data_size": 63488 00:09:19.432 }, 00:09:19.432 { 00:09:19.432 "name": null, 00:09:19.432 "uuid": "88ad5d54-6e6f-4e87-b4c1-222791094403", 00:09:19.432 "is_configured": false, 00:09:19.432 "data_offset": 0, 00:09:19.432 "data_size": 63488 00:09:19.432 }, 00:09:19.432 { 00:09:19.432 "name": "BaseBdev3", 00:09:19.432 "uuid": "75925145-16f7-4969-bef8-ec41452394ca", 00:09:19.432 "is_configured": true, 00:09:19.432 "data_offset": 2048, 00:09:19.432 "data_size": 63488 00:09:19.432 } 00:09:19.432 ] 00:09:19.432 }' 00:09:19.432 20:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.432 20:22:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:19.690 20:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:19.690 20:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.690 20:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.690 20:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.690 20:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.690 20:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:19.690 20:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:19.690 20:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.690 20:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.690 [2024-11-26 20:22:13.201195] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:19.690 20:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.690 20:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:19.690 20:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.690 20:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.690 20:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:19.690 20:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.690 20:22:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.690 20:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.690 20:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.690 20:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.691 20:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.691 20:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.691 20:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.691 20:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.691 20:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.691 20:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.950 20:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.950 "name": "Existed_Raid", 00:09:19.950 "uuid": "ef6d229f-2e78-4662-a488-f48573b49167", 00:09:19.950 "strip_size_kb": 64, 00:09:19.950 "state": "configuring", 00:09:19.950 "raid_level": "raid0", 00:09:19.950 "superblock": true, 00:09:19.950 "num_base_bdevs": 3, 00:09:19.950 "num_base_bdevs_discovered": 1, 00:09:19.950 "num_base_bdevs_operational": 3, 00:09:19.950 "base_bdevs_list": [ 00:09:19.950 { 00:09:19.950 "name": null, 00:09:19.950 "uuid": "1e5d1bc6-15da-4fb1-8f63-71f13607b548", 00:09:19.950 "is_configured": false, 00:09:19.950 "data_offset": 0, 00:09:19.950 "data_size": 63488 00:09:19.950 }, 00:09:19.950 { 00:09:19.950 "name": null, 00:09:19.950 "uuid": "88ad5d54-6e6f-4e87-b4c1-222791094403", 00:09:19.950 "is_configured": false, 00:09:19.950 "data_offset": 0, 00:09:19.950 
"data_size": 63488 00:09:19.950 }, 00:09:19.950 { 00:09:19.950 "name": "BaseBdev3", 00:09:19.950 "uuid": "75925145-16f7-4969-bef8-ec41452394ca", 00:09:19.950 "is_configured": true, 00:09:19.950 "data_offset": 2048, 00:09:19.950 "data_size": 63488 00:09:19.950 } 00:09:19.950 ] 00:09:19.950 }' 00:09:19.950 20:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.950 20:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.210 20:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.210 20:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.210 20:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.210 20:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:20.210 20:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.210 20:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:20.210 20:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:20.210 20:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.210 20:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.210 [2024-11-26 20:22:13.715248] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:20.210 20:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.210 20:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:20.210 20:22:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.210 20:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.210 20:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:20.210 20:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.210 20:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.210 20:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.210 20:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.210 20:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.210 20:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.210 20:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.210 20:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.210 20:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.210 20:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.210 20:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.470 20:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.470 "name": "Existed_Raid", 00:09:20.470 "uuid": "ef6d229f-2e78-4662-a488-f48573b49167", 00:09:20.470 "strip_size_kb": 64, 00:09:20.470 "state": "configuring", 00:09:20.470 "raid_level": "raid0", 00:09:20.470 "superblock": true, 00:09:20.470 "num_base_bdevs": 3, 00:09:20.470 
"num_base_bdevs_discovered": 2, 00:09:20.470 "num_base_bdevs_operational": 3, 00:09:20.470 "base_bdevs_list": [ 00:09:20.470 { 00:09:20.470 "name": null, 00:09:20.470 "uuid": "1e5d1bc6-15da-4fb1-8f63-71f13607b548", 00:09:20.470 "is_configured": false, 00:09:20.470 "data_offset": 0, 00:09:20.470 "data_size": 63488 00:09:20.470 }, 00:09:20.470 { 00:09:20.470 "name": "BaseBdev2", 00:09:20.470 "uuid": "88ad5d54-6e6f-4e87-b4c1-222791094403", 00:09:20.470 "is_configured": true, 00:09:20.470 "data_offset": 2048, 00:09:20.470 "data_size": 63488 00:09:20.470 }, 00:09:20.470 { 00:09:20.470 "name": "BaseBdev3", 00:09:20.470 "uuid": "75925145-16f7-4969-bef8-ec41452394ca", 00:09:20.470 "is_configured": true, 00:09:20.470 "data_offset": 2048, 00:09:20.470 "data_size": 63488 00:09:20.470 } 00:09:20.470 ] 00:09:20.470 }' 00:09:20.470 20:22:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.470 20:22:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.731 20:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:20.731 20:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.731 20:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.731 20:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.731 20:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.731 20:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:20.731 20:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:20.731 20:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.731 20:22:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.731 20:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.731 20:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.731 20:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1e5d1bc6-15da-4fb1-8f63-71f13607b548 00:09:20.731 20:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.731 20:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.731 NewBaseBdev 00:09:20.731 [2024-11-26 20:22:14.213714] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:20.731 [2024-11-26 20:22:14.213897] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:20.731 [2024-11-26 20:22:14.213917] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:20.731 [2024-11-26 20:22:14.214199] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:20.731 [2024-11-26 20:22:14.214330] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:20.731 [2024-11-26 20:22:14.214340] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:09:20.731 [2024-11-26 20:22:14.214459] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:20.731 20:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.731 20:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:20.731 20:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:20.731 
20:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:20.731 20:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:20.731 20:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:20.731 20:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:20.731 20:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:20.731 20:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.731 20:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.731 20:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.731 20:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:20.731 20:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.731 20:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.731 [ 00:09:20.731 { 00:09:20.731 "name": "NewBaseBdev", 00:09:20.731 "aliases": [ 00:09:20.731 "1e5d1bc6-15da-4fb1-8f63-71f13607b548" 00:09:20.731 ], 00:09:20.731 "product_name": "Malloc disk", 00:09:20.731 "block_size": 512, 00:09:20.731 "num_blocks": 65536, 00:09:20.731 "uuid": "1e5d1bc6-15da-4fb1-8f63-71f13607b548", 00:09:20.731 "assigned_rate_limits": { 00:09:20.731 "rw_ios_per_sec": 0, 00:09:20.731 "rw_mbytes_per_sec": 0, 00:09:20.731 "r_mbytes_per_sec": 0, 00:09:20.731 "w_mbytes_per_sec": 0 00:09:20.731 }, 00:09:20.731 "claimed": true, 00:09:20.731 "claim_type": "exclusive_write", 00:09:20.731 "zoned": false, 00:09:20.731 "supported_io_types": { 00:09:20.731 "read": true, 00:09:20.731 "write": true, 00:09:20.731 
"unmap": true, 00:09:20.731 "flush": true, 00:09:20.731 "reset": true, 00:09:20.731 "nvme_admin": false, 00:09:20.731 "nvme_io": false, 00:09:20.731 "nvme_io_md": false, 00:09:20.731 "write_zeroes": true, 00:09:20.731 "zcopy": true, 00:09:20.731 "get_zone_info": false, 00:09:20.731 "zone_management": false, 00:09:20.731 "zone_append": false, 00:09:20.731 "compare": false, 00:09:20.731 "compare_and_write": false, 00:09:20.731 "abort": true, 00:09:20.731 "seek_hole": false, 00:09:20.731 "seek_data": false, 00:09:20.731 "copy": true, 00:09:20.731 "nvme_iov_md": false 00:09:20.731 }, 00:09:20.731 "memory_domains": [ 00:09:20.731 { 00:09:20.731 "dma_device_id": "system", 00:09:20.731 "dma_device_type": 1 00:09:20.731 }, 00:09:20.731 { 00:09:20.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.731 "dma_device_type": 2 00:09:20.731 } 00:09:20.731 ], 00:09:20.731 "driver_specific": {} 00:09:20.731 } 00:09:20.731 ] 00:09:20.731 20:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.731 20:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:20.731 20:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:20.731 20:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.731 20:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:20.731 20:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:20.731 20:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.731 20:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.731 20:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:09:20.731 20:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.731 20:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.731 20:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.731 20:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.731 20:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.731 20:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.731 20:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.991 20:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.991 20:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.991 "name": "Existed_Raid", 00:09:20.991 "uuid": "ef6d229f-2e78-4662-a488-f48573b49167", 00:09:20.991 "strip_size_kb": 64, 00:09:20.991 "state": "online", 00:09:20.991 "raid_level": "raid0", 00:09:20.991 "superblock": true, 00:09:20.991 "num_base_bdevs": 3, 00:09:20.991 "num_base_bdevs_discovered": 3, 00:09:20.991 "num_base_bdevs_operational": 3, 00:09:20.991 "base_bdevs_list": [ 00:09:20.991 { 00:09:20.991 "name": "NewBaseBdev", 00:09:20.991 "uuid": "1e5d1bc6-15da-4fb1-8f63-71f13607b548", 00:09:20.991 "is_configured": true, 00:09:20.991 "data_offset": 2048, 00:09:20.991 "data_size": 63488 00:09:20.991 }, 00:09:20.991 { 00:09:20.991 "name": "BaseBdev2", 00:09:20.991 "uuid": "88ad5d54-6e6f-4e87-b4c1-222791094403", 00:09:20.991 "is_configured": true, 00:09:20.991 "data_offset": 2048, 00:09:20.991 "data_size": 63488 00:09:20.991 }, 00:09:20.991 { 00:09:20.991 "name": "BaseBdev3", 00:09:20.991 "uuid": "75925145-16f7-4969-bef8-ec41452394ca", 00:09:20.991 
"is_configured": true, 00:09:20.991 "data_offset": 2048, 00:09:20.991 "data_size": 63488 00:09:20.991 } 00:09:20.991 ] 00:09:20.991 }' 00:09:20.991 20:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.991 20:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.306 20:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:21.306 20:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:21.306 20:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:21.306 20:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:21.306 20:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:21.306 20:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:21.306 20:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:21.306 20:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.306 20:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.306 20:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:21.306 [2024-11-26 20:22:14.769222] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:21.306 20:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.306 20:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:21.306 "name": "Existed_Raid", 00:09:21.306 "aliases": [ 00:09:21.306 "ef6d229f-2e78-4662-a488-f48573b49167" 00:09:21.306 ], 00:09:21.306 "product_name": "Raid 
Volume", 00:09:21.306 "block_size": 512, 00:09:21.306 "num_blocks": 190464, 00:09:21.307 "uuid": "ef6d229f-2e78-4662-a488-f48573b49167", 00:09:21.307 "assigned_rate_limits": { 00:09:21.307 "rw_ios_per_sec": 0, 00:09:21.307 "rw_mbytes_per_sec": 0, 00:09:21.307 "r_mbytes_per_sec": 0, 00:09:21.307 "w_mbytes_per_sec": 0 00:09:21.307 }, 00:09:21.307 "claimed": false, 00:09:21.307 "zoned": false, 00:09:21.307 "supported_io_types": { 00:09:21.307 "read": true, 00:09:21.307 "write": true, 00:09:21.307 "unmap": true, 00:09:21.307 "flush": true, 00:09:21.307 "reset": true, 00:09:21.307 "nvme_admin": false, 00:09:21.307 "nvme_io": false, 00:09:21.307 "nvme_io_md": false, 00:09:21.307 "write_zeroes": true, 00:09:21.307 "zcopy": false, 00:09:21.307 "get_zone_info": false, 00:09:21.307 "zone_management": false, 00:09:21.307 "zone_append": false, 00:09:21.307 "compare": false, 00:09:21.307 "compare_and_write": false, 00:09:21.307 "abort": false, 00:09:21.307 "seek_hole": false, 00:09:21.307 "seek_data": false, 00:09:21.307 "copy": false, 00:09:21.307 "nvme_iov_md": false 00:09:21.307 }, 00:09:21.307 "memory_domains": [ 00:09:21.307 { 00:09:21.307 "dma_device_id": "system", 00:09:21.307 "dma_device_type": 1 00:09:21.307 }, 00:09:21.307 { 00:09:21.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.307 "dma_device_type": 2 00:09:21.307 }, 00:09:21.307 { 00:09:21.307 "dma_device_id": "system", 00:09:21.307 "dma_device_type": 1 00:09:21.307 }, 00:09:21.307 { 00:09:21.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.307 "dma_device_type": 2 00:09:21.307 }, 00:09:21.307 { 00:09:21.307 "dma_device_id": "system", 00:09:21.307 "dma_device_type": 1 00:09:21.307 }, 00:09:21.307 { 00:09:21.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.307 "dma_device_type": 2 00:09:21.307 } 00:09:21.307 ], 00:09:21.307 "driver_specific": { 00:09:21.307 "raid": { 00:09:21.307 "uuid": "ef6d229f-2e78-4662-a488-f48573b49167", 00:09:21.307 "strip_size_kb": 64, 00:09:21.307 "state": "online", 
00:09:21.307 "raid_level": "raid0", 00:09:21.307 "superblock": true, 00:09:21.307 "num_base_bdevs": 3, 00:09:21.307 "num_base_bdevs_discovered": 3, 00:09:21.307 "num_base_bdevs_operational": 3, 00:09:21.307 "base_bdevs_list": [ 00:09:21.307 { 00:09:21.307 "name": "NewBaseBdev", 00:09:21.307 "uuid": "1e5d1bc6-15da-4fb1-8f63-71f13607b548", 00:09:21.307 "is_configured": true, 00:09:21.307 "data_offset": 2048, 00:09:21.307 "data_size": 63488 00:09:21.307 }, 00:09:21.307 { 00:09:21.307 "name": "BaseBdev2", 00:09:21.307 "uuid": "88ad5d54-6e6f-4e87-b4c1-222791094403", 00:09:21.307 "is_configured": true, 00:09:21.307 "data_offset": 2048, 00:09:21.307 "data_size": 63488 00:09:21.307 }, 00:09:21.307 { 00:09:21.307 "name": "BaseBdev3", 00:09:21.307 "uuid": "75925145-16f7-4969-bef8-ec41452394ca", 00:09:21.307 "is_configured": true, 00:09:21.307 "data_offset": 2048, 00:09:21.307 "data_size": 63488 00:09:21.307 } 00:09:21.307 ] 00:09:21.307 } 00:09:21.307 } 00:09:21.307 }' 00:09:21.307 20:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:21.624 20:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:21.624 BaseBdev2 00:09:21.624 BaseBdev3' 00:09:21.624 20:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.624 20:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:21.624 20:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.624 20:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.624 20:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b 
NewBaseBdev 00:09:21.624 20:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.624 20:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.624 20:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.624 20:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:21.624 20:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:21.624 20:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.624 20:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:21.624 20:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.624 20:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.624 20:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.624 20:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.624 20:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:21.624 20:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:21.624 20:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.624 20:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:21.624 20:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.624 20:22:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.624 20:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.624 20:22:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.624 20:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:21.624 20:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:21.624 20:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:21.624 20:22:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.624 20:22:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.624 [2024-11-26 20:22:15.024514] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:21.624 [2024-11-26 20:22:15.024547] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:21.624 [2024-11-26 20:22:15.024649] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:21.624 [2024-11-26 20:22:15.024724] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:21.624 [2024-11-26 20:22:15.024749] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:09:21.624 20:22:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.624 20:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 76044 00:09:21.624 20:22:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 76044 ']' 00:09:21.624 20:22:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 
76044 00:09:21.624 20:22:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:21.624 20:22:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:21.624 20:22:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76044 00:09:21.624 killing process with pid 76044 00:09:21.624 20:22:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:21.624 20:22:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:21.624 20:22:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76044' 00:09:21.624 20:22:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 76044 00:09:21.624 [2024-11-26 20:22:15.075226] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:21.624 20:22:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 76044 00:09:21.624 [2024-11-26 20:22:15.120973] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:22.192 20:22:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:22.192 00:09:22.192 real 0m9.392s 00:09:22.192 user 0m15.791s 00:09:22.192 sys 0m1.974s 00:09:22.192 20:22:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:22.192 20:22:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.192 ************************************ 00:09:22.193 END TEST raid_state_function_test_sb 00:09:22.193 ************************************ 00:09:22.193 20:22:15 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:09:22.193 20:22:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:22.193 
20:22:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:22.193 20:22:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:22.193 ************************************ 00:09:22.193 START TEST raid_superblock_test 00:09:22.193 ************************************ 00:09:22.193 20:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 3 00:09:22.193 20:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:09:22.193 20:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:22.193 20:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:22.193 20:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:22.193 20:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:22.193 20:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:22.193 20:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:22.193 20:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:22.193 20:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:22.193 20:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:22.193 20:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:22.193 20:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:22.193 20:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:22.193 20:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:09:22.193 20:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 
00:09:22.193 20:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:22.193 20:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:22.193 20:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=76653 00:09:22.193 20:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 76653 00:09:22.193 20:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 76653 ']' 00:09:22.193 20:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.193 20:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:22.193 20:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.193 20:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:22.193 20:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.193 [2024-11-26 20:22:15.664581] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:22.193 [2024-11-26 20:22:15.664752] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76653 ] 00:09:22.451 [2024-11-26 20:22:15.824508] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.451 [2024-11-26 20:22:15.916104] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.451 [2024-11-26 20:22:15.995356] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:22.451 [2024-11-26 20:22:15.995397] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:23.387 20:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:23.387 20:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:23.387 20:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:23.387 20:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:23.387 20:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:23.387 20:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:23.387 20:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:23.387 20:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:23.387 20:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:23.387 20:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:23.387 20:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:23.387 
20:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.387 20:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.387 malloc1 00:09:23.387 20:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.387 20:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:23.387 20:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.387 20:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.387 [2024-11-26 20:22:16.601609] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:23.387 [2024-11-26 20:22:16.601809] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:23.387 [2024-11-26 20:22:16.601860] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:23.387 [2024-11-26 20:22:16.601904] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:23.387 [2024-11-26 20:22:16.604582] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:23.387 [2024-11-26 20:22:16.604716] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:23.387 pt1 00:09:23.387 20:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.387 20:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:23.387 20:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:23.387 20:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:23.387 20:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:23.387 20:22:16 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:23.387 20:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:23.387 20:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:23.387 20:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:23.387 20:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:23.387 20:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.387 20:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.387 malloc2 00:09:23.387 20:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.387 20:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:23.387 20:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.387 20:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.387 [2024-11-26 20:22:16.651471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:23.387 [2024-11-26 20:22:16.651549] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:23.387 [2024-11-26 20:22:16.651569] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:23.387 [2024-11-26 20:22:16.651580] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:23.387 [2024-11-26 20:22:16.654149] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:23.387 [2024-11-26 20:22:16.654194] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:23.387 
pt2 00:09:23.387 20:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.387 20:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:23.387 20:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:23.387 20:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:23.387 20:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:23.388 20:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:23.388 20:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:23.388 20:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:23.388 20:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:23.388 20:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:23.388 20:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.388 20:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.388 malloc3 00:09:23.388 20:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.388 20:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:23.388 20:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.388 20:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.388 [2024-11-26 20:22:16.681222] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:23.388 [2024-11-26 20:22:16.681400] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:23.388 [2024-11-26 20:22:16.681447] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:23.388 [2024-11-26 20:22:16.681483] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:23.388 [2024-11-26 20:22:16.684174] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:23.388 [2024-11-26 20:22:16.684285] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:23.388 pt3 00:09:23.388 20:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.388 20:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:23.388 20:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:23.388 20:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:23.388 20:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.388 20:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.388 [2024-11-26 20:22:16.693273] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:23.388 [2024-11-26 20:22:16.695618] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:23.388 [2024-11-26 20:22:16.695786] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:23.388 [2024-11-26 20:22:16.695999] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:09:23.388 [2024-11-26 20:22:16.696060] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:23.388 [2024-11-26 20:22:16.696432] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 
00:09:23.388 [2024-11-26 20:22:16.696685] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:09:23.388 [2024-11-26 20:22:16.696742] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:09:23.388 [2024-11-26 20:22:16.696971] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:23.388 20:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.388 20:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:23.388 20:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:23.388 20:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:23.388 20:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:23.388 20:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.388 20:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.388 20:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.388 20:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.388 20:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.388 20:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.388 20:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.388 20:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.388 20:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:23.388 20:22:16 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.388 20:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.388 20:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.388 "name": "raid_bdev1", 00:09:23.388 "uuid": "fb100476-8222-4048-9660-74c50b5763d2", 00:09:23.388 "strip_size_kb": 64, 00:09:23.388 "state": "online", 00:09:23.388 "raid_level": "raid0", 00:09:23.388 "superblock": true, 00:09:23.388 "num_base_bdevs": 3, 00:09:23.388 "num_base_bdevs_discovered": 3, 00:09:23.388 "num_base_bdevs_operational": 3, 00:09:23.388 "base_bdevs_list": [ 00:09:23.388 { 00:09:23.388 "name": "pt1", 00:09:23.388 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:23.388 "is_configured": true, 00:09:23.388 "data_offset": 2048, 00:09:23.388 "data_size": 63488 00:09:23.388 }, 00:09:23.388 { 00:09:23.388 "name": "pt2", 00:09:23.388 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:23.388 "is_configured": true, 00:09:23.388 "data_offset": 2048, 00:09:23.388 "data_size": 63488 00:09:23.388 }, 00:09:23.388 { 00:09:23.388 "name": "pt3", 00:09:23.388 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:23.388 "is_configured": true, 00:09:23.388 "data_offset": 2048, 00:09:23.388 "data_size": 63488 00:09:23.388 } 00:09:23.388 ] 00:09:23.388 }' 00:09:23.388 20:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.388 20:22:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.646 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:23.646 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:23.646 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:23.646 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:23.646 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:23.646 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:23.646 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:23.646 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:23.646 20:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.646 20:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.646 [2024-11-26 20:22:17.172808] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:23.646 20:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.904 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:23.904 "name": "raid_bdev1", 00:09:23.904 "aliases": [ 00:09:23.904 "fb100476-8222-4048-9660-74c50b5763d2" 00:09:23.904 ], 00:09:23.904 "product_name": "Raid Volume", 00:09:23.904 "block_size": 512, 00:09:23.904 "num_blocks": 190464, 00:09:23.904 "uuid": "fb100476-8222-4048-9660-74c50b5763d2", 00:09:23.904 "assigned_rate_limits": { 00:09:23.904 "rw_ios_per_sec": 0, 00:09:23.904 "rw_mbytes_per_sec": 0, 00:09:23.904 "r_mbytes_per_sec": 0, 00:09:23.904 "w_mbytes_per_sec": 0 00:09:23.904 }, 00:09:23.904 "claimed": false, 00:09:23.904 "zoned": false, 00:09:23.904 "supported_io_types": { 00:09:23.904 "read": true, 00:09:23.904 "write": true, 00:09:23.904 "unmap": true, 00:09:23.904 "flush": true, 00:09:23.904 "reset": true, 00:09:23.904 "nvme_admin": false, 00:09:23.904 "nvme_io": false, 00:09:23.904 "nvme_io_md": false, 00:09:23.904 "write_zeroes": true, 00:09:23.904 "zcopy": false, 00:09:23.904 "get_zone_info": false, 00:09:23.904 "zone_management": false, 00:09:23.904 "zone_append": false, 00:09:23.904 "compare": 
false, 00:09:23.904 "compare_and_write": false, 00:09:23.904 "abort": false, 00:09:23.904 "seek_hole": false, 00:09:23.904 "seek_data": false, 00:09:23.904 "copy": false, 00:09:23.904 "nvme_iov_md": false 00:09:23.904 }, 00:09:23.904 "memory_domains": [ 00:09:23.904 { 00:09:23.904 "dma_device_id": "system", 00:09:23.904 "dma_device_type": 1 00:09:23.904 }, 00:09:23.904 { 00:09:23.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.904 "dma_device_type": 2 00:09:23.904 }, 00:09:23.904 { 00:09:23.904 "dma_device_id": "system", 00:09:23.904 "dma_device_type": 1 00:09:23.904 }, 00:09:23.904 { 00:09:23.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.904 "dma_device_type": 2 00:09:23.904 }, 00:09:23.904 { 00:09:23.904 "dma_device_id": "system", 00:09:23.904 "dma_device_type": 1 00:09:23.904 }, 00:09:23.904 { 00:09:23.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.904 "dma_device_type": 2 00:09:23.904 } 00:09:23.904 ], 00:09:23.904 "driver_specific": { 00:09:23.904 "raid": { 00:09:23.904 "uuid": "fb100476-8222-4048-9660-74c50b5763d2", 00:09:23.904 "strip_size_kb": 64, 00:09:23.904 "state": "online", 00:09:23.904 "raid_level": "raid0", 00:09:23.904 "superblock": true, 00:09:23.904 "num_base_bdevs": 3, 00:09:23.904 "num_base_bdevs_discovered": 3, 00:09:23.904 "num_base_bdevs_operational": 3, 00:09:23.904 "base_bdevs_list": [ 00:09:23.904 { 00:09:23.904 "name": "pt1", 00:09:23.904 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:23.904 "is_configured": true, 00:09:23.904 "data_offset": 2048, 00:09:23.904 "data_size": 63488 00:09:23.905 }, 00:09:23.905 { 00:09:23.905 "name": "pt2", 00:09:23.905 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:23.905 "is_configured": true, 00:09:23.905 "data_offset": 2048, 00:09:23.905 "data_size": 63488 00:09:23.905 }, 00:09:23.905 { 00:09:23.905 "name": "pt3", 00:09:23.905 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:23.905 "is_configured": true, 00:09:23.905 "data_offset": 2048, 00:09:23.905 "data_size": 
63488 00:09:23.905 } 00:09:23.905 ] 00:09:23.905 } 00:09:23.905 } 00:09:23.905 }' 00:09:23.905 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:23.905 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:23.905 pt2 00:09:23.905 pt3' 00:09:23.905 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:23.905 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:23.905 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:23.905 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:23.905 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:23.905 20:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.905 20:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.905 20:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.905 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:23.905 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:23.905 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:23.905 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:23.905 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:23.905 20:22:17 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.905 20:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.905 20:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.905 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:23.905 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:23.905 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:23.905 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:23.905 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:23.905 20:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.905 20:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.905 20:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.905 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:23.905 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:23.905 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:23.905 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:23.905 20:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.905 20:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.165 [2024-11-26 20:22:17.456390] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=fb100476-8222-4048-9660-74c50b5763d2 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z fb100476-8222-4048-9660-74c50b5763d2 ']' 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.165 [2024-11-26 20:22:17.487958] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:24.165 [2024-11-26 20:22:17.487996] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:24.165 [2024-11-26 20:22:17.488108] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:24.165 [2024-11-26 20:22:17.488202] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:24.165 [2024-11-26 20:22:17.488218] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:24.165 20:22:17 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.165 [2024-11-26 20:22:17.627773] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:24.165 [2024-11-26 20:22:17.630042] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:24.165 [2024-11-26 20:22:17.630105] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:24.165 [2024-11-26 20:22:17.630168] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:24.165 [2024-11-26 20:22:17.630239] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:24.165 [2024-11-26 20:22:17.630274] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:24.165 [2024-11-26 20:22:17.630291] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:24.165 [2024-11-26 20:22:17.630312] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:09:24.165 request: 00:09:24.165 { 00:09:24.165 "name": "raid_bdev1", 00:09:24.165 "raid_level": "raid0", 00:09:24.165 "base_bdevs": [ 00:09:24.165 "malloc1", 00:09:24.165 "malloc2", 00:09:24.165 "malloc3" 00:09:24.165 ], 00:09:24.165 "strip_size_kb": 64, 00:09:24.165 "superblock": false, 00:09:24.165 "method": "bdev_raid_create", 00:09:24.165 "req_id": 1 00:09:24.165 } 00:09:24.165 Got JSON-RPC error response 00:09:24.165 response: 00:09:24.165 { 00:09:24.165 "code": -17, 00:09:24.165 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:24.165 } 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.165 20:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.165 [2024-11-26 20:22:17.691628] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:24.165 [2024-11-26 20:22:17.691703] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:24.165 [2024-11-26 20:22:17.691724] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:24.165 [2024-11-26 20:22:17.691739] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:24.165 [2024-11-26 20:22:17.694286] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:24.165 [2024-11-26 20:22:17.694336] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:24.165 [2024-11-26 20:22:17.694425] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:24.165 [2024-11-26 20:22:17.694475] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:09:24.165 pt1 00:09:24.166 20:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.166 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:24.166 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:24.166 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:24.166 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:24.166 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.166 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.166 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.166 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.166 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.166 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.166 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.166 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:24.166 20:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.166 20:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.424 20:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.424 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.424 "name": "raid_bdev1", 00:09:24.424 "uuid": "fb100476-8222-4048-9660-74c50b5763d2", 00:09:24.424 
"strip_size_kb": 64, 00:09:24.424 "state": "configuring", 00:09:24.424 "raid_level": "raid0", 00:09:24.424 "superblock": true, 00:09:24.424 "num_base_bdevs": 3, 00:09:24.424 "num_base_bdevs_discovered": 1, 00:09:24.424 "num_base_bdevs_operational": 3, 00:09:24.424 "base_bdevs_list": [ 00:09:24.424 { 00:09:24.424 "name": "pt1", 00:09:24.424 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:24.424 "is_configured": true, 00:09:24.424 "data_offset": 2048, 00:09:24.424 "data_size": 63488 00:09:24.424 }, 00:09:24.424 { 00:09:24.424 "name": null, 00:09:24.424 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:24.424 "is_configured": false, 00:09:24.424 "data_offset": 2048, 00:09:24.424 "data_size": 63488 00:09:24.424 }, 00:09:24.424 { 00:09:24.424 "name": null, 00:09:24.424 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:24.424 "is_configured": false, 00:09:24.424 "data_offset": 2048, 00:09:24.424 "data_size": 63488 00:09:24.424 } 00:09:24.424 ] 00:09:24.424 }' 00:09:24.424 20:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.424 20:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.682 20:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:24.682 20:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:24.682 20:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.682 20:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.683 [2024-11-26 20:22:18.126903] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:24.683 [2024-11-26 20:22:18.126982] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:24.683 [2024-11-26 20:22:18.127028] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:09:24.683 [2024-11-26 20:22:18.127047] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:24.683 [2024-11-26 20:22:18.127510] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:24.683 [2024-11-26 20:22:18.127545] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:24.683 [2024-11-26 20:22:18.127644] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:24.683 [2024-11-26 20:22:18.127680] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:24.683 pt2 00:09:24.683 20:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.683 20:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:24.683 20:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.683 20:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.683 [2024-11-26 20:22:18.134923] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:24.683 20:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.683 20:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:24.683 20:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:24.683 20:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:24.683 20:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:24.683 20:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.683 20:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.683 20:22:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.683 20:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.683 20:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.683 20:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.683 20:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.683 20:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.683 20:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.683 20:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:24.683 20:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.683 20:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.683 "name": "raid_bdev1", 00:09:24.683 "uuid": "fb100476-8222-4048-9660-74c50b5763d2", 00:09:24.683 "strip_size_kb": 64, 00:09:24.683 "state": "configuring", 00:09:24.683 "raid_level": "raid0", 00:09:24.683 "superblock": true, 00:09:24.683 "num_base_bdevs": 3, 00:09:24.683 "num_base_bdevs_discovered": 1, 00:09:24.683 "num_base_bdevs_operational": 3, 00:09:24.683 "base_bdevs_list": [ 00:09:24.683 { 00:09:24.683 "name": "pt1", 00:09:24.683 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:24.683 "is_configured": true, 00:09:24.683 "data_offset": 2048, 00:09:24.683 "data_size": 63488 00:09:24.683 }, 00:09:24.683 { 00:09:24.683 "name": null, 00:09:24.683 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:24.683 "is_configured": false, 00:09:24.683 "data_offset": 0, 00:09:24.683 "data_size": 63488 00:09:24.683 }, 00:09:24.683 { 00:09:24.683 "name": null, 00:09:24.683 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:24.683 
"is_configured": false, 00:09:24.683 "data_offset": 2048, 00:09:24.683 "data_size": 63488 00:09:24.683 } 00:09:24.683 ] 00:09:24.683 }' 00:09:24.683 20:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.683 20:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.250 20:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:25.250 20:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:25.250 20:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:25.250 20:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.250 20:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.250 [2024-11-26 20:22:18.598130] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:25.250 [2024-11-26 20:22:18.598204] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:25.250 [2024-11-26 20:22:18.598227] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:25.250 [2024-11-26 20:22:18.598241] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:25.250 [2024-11-26 20:22:18.598701] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:25.250 [2024-11-26 20:22:18.598731] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:25.250 [2024-11-26 20:22:18.598819] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:25.250 [2024-11-26 20:22:18.598848] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:25.250 pt2 00:09:25.250 20:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:25.250 20:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:25.250 20:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:25.250 20:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:25.250 20:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.250 20:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.250 [2024-11-26 20:22:18.606080] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:25.250 [2024-11-26 20:22:18.606138] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:25.250 [2024-11-26 20:22:18.606159] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:25.250 [2024-11-26 20:22:18.606171] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:25.250 [2024-11-26 20:22:18.606586] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:25.250 [2024-11-26 20:22:18.606629] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:25.250 [2024-11-26 20:22:18.606707] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:25.250 [2024-11-26 20:22:18.606733] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:25.250 [2024-11-26 20:22:18.606844] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:25.250 [2024-11-26 20:22:18.606858] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:25.250 [2024-11-26 20:22:18.607111] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:25.250 [2024-11-26 20:22:18.607233] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:25.250 [2024-11-26 20:22:18.607251] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:09:25.250 [2024-11-26 20:22:18.607355] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:25.250 pt3 00:09:25.250 20:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.250 20:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:25.250 20:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:25.250 20:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:25.250 20:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:25.250 20:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:25.250 20:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:25.250 20:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:25.250 20:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.250 20:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.250 20:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.250 20:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.250 20:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.250 20:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.250 20:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:09:25.250 20:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.250 20:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.250 20:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.250 20:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.250 "name": "raid_bdev1", 00:09:25.250 "uuid": "fb100476-8222-4048-9660-74c50b5763d2", 00:09:25.250 "strip_size_kb": 64, 00:09:25.250 "state": "online", 00:09:25.250 "raid_level": "raid0", 00:09:25.250 "superblock": true, 00:09:25.250 "num_base_bdevs": 3, 00:09:25.250 "num_base_bdevs_discovered": 3, 00:09:25.250 "num_base_bdevs_operational": 3, 00:09:25.250 "base_bdevs_list": [ 00:09:25.250 { 00:09:25.250 "name": "pt1", 00:09:25.250 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:25.250 "is_configured": true, 00:09:25.250 "data_offset": 2048, 00:09:25.250 "data_size": 63488 00:09:25.250 }, 00:09:25.250 { 00:09:25.250 "name": "pt2", 00:09:25.250 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:25.250 "is_configured": true, 00:09:25.250 "data_offset": 2048, 00:09:25.250 "data_size": 63488 00:09:25.250 }, 00:09:25.250 { 00:09:25.250 "name": "pt3", 00:09:25.250 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:25.250 "is_configured": true, 00:09:25.250 "data_offset": 2048, 00:09:25.250 "data_size": 63488 00:09:25.250 } 00:09:25.250 ] 00:09:25.250 }' 00:09:25.250 20:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.250 20:22:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.509 20:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:25.509 20:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:25.509 20:22:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:25.509 20:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:25.509 20:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:25.509 20:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:25.509 20:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:25.509 20:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.509 20:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.509 20:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:25.509 [2024-11-26 20:22:19.013808] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:25.509 20:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.509 20:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:25.509 "name": "raid_bdev1", 00:09:25.510 "aliases": [ 00:09:25.510 "fb100476-8222-4048-9660-74c50b5763d2" 00:09:25.510 ], 00:09:25.510 "product_name": "Raid Volume", 00:09:25.510 "block_size": 512, 00:09:25.510 "num_blocks": 190464, 00:09:25.510 "uuid": "fb100476-8222-4048-9660-74c50b5763d2", 00:09:25.510 "assigned_rate_limits": { 00:09:25.510 "rw_ios_per_sec": 0, 00:09:25.510 "rw_mbytes_per_sec": 0, 00:09:25.510 "r_mbytes_per_sec": 0, 00:09:25.510 "w_mbytes_per_sec": 0 00:09:25.510 }, 00:09:25.510 "claimed": false, 00:09:25.510 "zoned": false, 00:09:25.510 "supported_io_types": { 00:09:25.510 "read": true, 00:09:25.510 "write": true, 00:09:25.510 "unmap": true, 00:09:25.510 "flush": true, 00:09:25.510 "reset": true, 00:09:25.510 "nvme_admin": false, 00:09:25.510 "nvme_io": false, 00:09:25.510 "nvme_io_md": false, 00:09:25.510 
"write_zeroes": true, 00:09:25.510 "zcopy": false, 00:09:25.510 "get_zone_info": false, 00:09:25.510 "zone_management": false, 00:09:25.510 "zone_append": false, 00:09:25.510 "compare": false, 00:09:25.510 "compare_and_write": false, 00:09:25.510 "abort": false, 00:09:25.510 "seek_hole": false, 00:09:25.510 "seek_data": false, 00:09:25.510 "copy": false, 00:09:25.510 "nvme_iov_md": false 00:09:25.510 }, 00:09:25.510 "memory_domains": [ 00:09:25.510 { 00:09:25.510 "dma_device_id": "system", 00:09:25.510 "dma_device_type": 1 00:09:25.510 }, 00:09:25.510 { 00:09:25.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.510 "dma_device_type": 2 00:09:25.510 }, 00:09:25.510 { 00:09:25.510 "dma_device_id": "system", 00:09:25.510 "dma_device_type": 1 00:09:25.510 }, 00:09:25.510 { 00:09:25.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.510 "dma_device_type": 2 00:09:25.510 }, 00:09:25.510 { 00:09:25.510 "dma_device_id": "system", 00:09:25.510 "dma_device_type": 1 00:09:25.510 }, 00:09:25.510 { 00:09:25.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.510 "dma_device_type": 2 00:09:25.510 } 00:09:25.510 ], 00:09:25.510 "driver_specific": { 00:09:25.510 "raid": { 00:09:25.510 "uuid": "fb100476-8222-4048-9660-74c50b5763d2", 00:09:25.510 "strip_size_kb": 64, 00:09:25.510 "state": "online", 00:09:25.510 "raid_level": "raid0", 00:09:25.510 "superblock": true, 00:09:25.510 "num_base_bdevs": 3, 00:09:25.510 "num_base_bdevs_discovered": 3, 00:09:25.510 "num_base_bdevs_operational": 3, 00:09:25.510 "base_bdevs_list": [ 00:09:25.510 { 00:09:25.510 "name": "pt1", 00:09:25.510 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:25.510 "is_configured": true, 00:09:25.510 "data_offset": 2048, 00:09:25.510 "data_size": 63488 00:09:25.510 }, 00:09:25.510 { 00:09:25.510 "name": "pt2", 00:09:25.510 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:25.510 "is_configured": true, 00:09:25.510 "data_offset": 2048, 00:09:25.510 "data_size": 63488 00:09:25.510 }, 00:09:25.510 
{ 00:09:25.510 "name": "pt3", 00:09:25.510 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:25.510 "is_configured": true, 00:09:25.510 "data_offset": 2048, 00:09:25.510 "data_size": 63488 00:09:25.510 } 00:09:25.510 ] 00:09:25.510 } 00:09:25.510 } 00:09:25.510 }' 00:09:25.510 20:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:25.767 20:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:25.767 pt2 00:09:25.767 pt3' 00:09:25.767 20:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.767 20:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:25.767 20:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.767 20:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.767 20:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:25.767 20:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.767 20:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.767 20:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.767 20:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.767 20:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.767 20:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.767 20:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:09:25.767 20:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:25.767 20:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.767 20:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.767 20:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.767 20:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.767 20:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.767 20:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.767 20:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.767 20:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:25.767 20:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.767 20:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.767 20:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.767 20:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.767 20:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.767 20:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:25.767 20:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.767 20:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.767 20:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:25.767 [2024-11-26 
20:22:19.257367] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:25.767 20:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.767 20:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' fb100476-8222-4048-9660-74c50b5763d2 '!=' fb100476-8222-4048-9660-74c50b5763d2 ']' 00:09:25.767 20:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:25.767 20:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:25.767 20:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:25.767 20:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 76653 00:09:25.767 20:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 76653 ']' 00:09:25.767 20:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 76653 00:09:25.767 20:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:09:25.768 20:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:25.768 20:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76653 00:09:26.025 killing process with pid 76653 00:09:26.025 20:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:26.025 20:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:26.025 20:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76653' 00:09:26.025 20:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 76653 00:09:26.025 [2024-11-26 20:22:19.332721] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:26.025 [2024-11-26 20:22:19.332858] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:26.025 [2024-11-26 20:22:19.332933] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:26.025 [2024-11-26 20:22:19.332944] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:09:26.025 20:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 76653 00:09:26.025 [2024-11-26 20:22:19.381944] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:26.284 20:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:26.284 00:09:26.284 real 0m4.194s 00:09:26.284 user 0m6.486s 00:09:26.284 sys 0m0.937s 00:09:26.284 20:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:26.284 20:22:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.284 ************************************ 00:09:26.284 END TEST raid_superblock_test 00:09:26.284 ************************************ 00:09:26.284 20:22:19 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:09:26.284 20:22:19 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:26.284 20:22:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:26.284 20:22:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:26.542 ************************************ 00:09:26.542 START TEST raid_read_error_test 00:09:26.542 ************************************ 00:09:26.542 20:22:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 read 00:09:26.543 20:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:26.543 20:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:26.543 20:22:19 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:26.543 20:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:26.543 20:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:26.543 20:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:26.543 20:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:26.543 20:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:26.543 20:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:26.543 20:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:26.543 20:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:26.543 20:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:26.543 20:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:26.543 20:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:26.543 20:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:26.543 20:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:26.543 20:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:26.543 20:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:26.543 20:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:26.543 20:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:26.543 20:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:26.543 20:22:19 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:26.543 20:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:26.543 20:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:26.543 20:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:26.543 20:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.yQ59p2czFP 00:09:26.543 20:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76895 00:09:26.543 20:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:26.543 20:22:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76895 00:09:26.543 20:22:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 76895 ']' 00:09:26.543 20:22:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.543 20:22:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:26.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.543 20:22:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.543 20:22:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:26.543 20:22:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.543 [2024-11-26 20:22:19.942686] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
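The `verify_raid_bdev_state` checks running through this log pull one bdev out of the `bdev_raid_get_bdevs all` output with a jq `select` filter before inspecting its fields. A minimal standalone sketch of that filter follows — the sample JSON here is made up for illustration and is not taken from the run above:

```shell
#!/bin/sh
# Hypothetical sample of bdev_raid_get_bdevs-style output (NOT from this run):
json='[{"name":"raid_bdev1","state":"online","raid_level":"raid0","num_base_bdevs_discovered":3}]'

# Mirror the log's filter: select the bdev by name, then read its state.
state=$(echo "$json" | jq -r '.[] | select(.name == "raid_bdev1") | .state')
echo "$state"   # prints: online
```

The same `select(.name == ...)` shape is what lets the test scripts work against the `all` listing rather than asking the RPC layer for one bdev at a time.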
00:09:26.543 [2024-11-26 20:22:19.942829] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76895 ] 00:09:26.543 [2024-11-26 20:22:20.091317] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.802 [2024-11-26 20:22:20.168817] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.802 [2024-11-26 20:22:20.241488] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:26.802 [2024-11-26 20:22:20.241533] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:27.369 20:22:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:27.369 20:22:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:27.369 20:22:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:27.369 20:22:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:27.369 20:22:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.369 20:22:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.369 BaseBdev1_malloc 00:09:27.369 20:22:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.369 20:22:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:27.369 20:22:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.369 20:22:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.369 true 00:09:27.369 20:22:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:27.369 20:22:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:27.369 20:22:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.369 20:22:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.369 [2024-11-26 20:22:20.846089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:27.369 [2024-11-26 20:22:20.846162] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:27.369 [2024-11-26 20:22:20.846195] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:27.369 [2024-11-26 20:22:20.846214] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:27.369 [2024-11-26 20:22:20.848456] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:27.369 [2024-11-26 20:22:20.848501] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:27.369 BaseBdev1 00:09:27.369 20:22:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.369 20:22:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:27.369 20:22:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:27.369 20:22:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.369 20:22:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.369 BaseBdev2_malloc 00:09:27.369 20:22:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.369 20:22:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:27.369 20:22:20 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.369 20:22:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.369 true 00:09:27.369 20:22:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.369 20:22:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:27.369 20:22:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.369 20:22:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.369 [2024-11-26 20:22:20.900168] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:27.369 [2024-11-26 20:22:20.900241] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:27.369 [2024-11-26 20:22:20.900275] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:27.369 [2024-11-26 20:22:20.900344] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:27.369 [2024-11-26 20:22:20.902792] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:27.369 [2024-11-26 20:22:20.902834] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:27.369 BaseBdev2 00:09:27.369 20:22:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.369 20:22:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:27.369 20:22:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:27.369 20:22:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.369 20:22:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.628 BaseBdev3_malloc 00:09:27.628 20:22:20 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.628 20:22:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:27.628 20:22:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.628 20:22:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.628 true 00:09:27.628 20:22:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.628 20:22:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:27.628 20:22:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.628 20:22:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.628 [2024-11-26 20:22:20.933063] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:27.628 [2024-11-26 20:22:20.933125] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:27.628 [2024-11-26 20:22:20.933154] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:27.628 [2024-11-26 20:22:20.933174] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:27.628 [2024-11-26 20:22:20.935385] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:27.628 [2024-11-26 20:22:20.935426] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:27.628 BaseBdev3 00:09:27.628 20:22:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.628 20:22:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:27.628 20:22:20 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.628 20:22:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.628 [2024-11-26 20:22:20.941107] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:27.628 [2024-11-26 20:22:20.943044] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:27.628 [2024-11-26 20:22:20.943149] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:27.628 [2024-11-26 20:22:20.943353] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:27.628 [2024-11-26 20:22:20.943378] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:27.628 [2024-11-26 20:22:20.943676] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:27.628 [2024-11-26 20:22:20.943857] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:27.628 [2024-11-26 20:22:20.943880] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:09:27.628 [2024-11-26 20:22:20.944048] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:27.628 20:22:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.628 20:22:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:27.628 20:22:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:27.628 20:22:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:27.628 20:22:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:27.628 20:22:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:27.628 20:22:20 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:27.628 20:22:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.628 20:22:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.628 20:22:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.628 20:22:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.628 20:22:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.628 20:22:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:27.628 20:22:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.628 20:22:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.628 20:22:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.628 20:22:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.628 "name": "raid_bdev1", 00:09:27.628 "uuid": "bf861a45-606c-41e2-b748-632b3763743f", 00:09:27.628 "strip_size_kb": 64, 00:09:27.628 "state": "online", 00:09:27.628 "raid_level": "raid0", 00:09:27.628 "superblock": true, 00:09:27.628 "num_base_bdevs": 3, 00:09:27.628 "num_base_bdevs_discovered": 3, 00:09:27.628 "num_base_bdevs_operational": 3, 00:09:27.628 "base_bdevs_list": [ 00:09:27.628 { 00:09:27.628 "name": "BaseBdev1", 00:09:27.628 "uuid": "b477b221-43a4-563a-b175-86c5d27cac03", 00:09:27.628 "is_configured": true, 00:09:27.628 "data_offset": 2048, 00:09:27.628 "data_size": 63488 00:09:27.628 }, 00:09:27.628 { 00:09:27.628 "name": "BaseBdev2", 00:09:27.628 "uuid": "7db1952e-1a83-5e9b-ae68-b3000754cf39", 00:09:27.628 "is_configured": true, 00:09:27.628 "data_offset": 2048, 00:09:27.628 "data_size": 63488 
00:09:27.628 }, 00:09:27.628 { 00:09:27.628 "name": "BaseBdev3", 00:09:27.628 "uuid": "8405d531-9519-5809-9dbe-1bfbae9f8768", 00:09:27.628 "is_configured": true, 00:09:27.628 "data_offset": 2048, 00:09:27.628 "data_size": 63488 00:09:27.628 } 00:09:27.628 ] 00:09:27.628 }' 00:09:27.628 20:22:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.628 20:22:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.887 20:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:27.887 20:22:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:28.145 [2024-11-26 20:22:21.492738] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:29.080 20:22:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:29.080 20:22:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.080 20:22:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.080 20:22:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.080 20:22:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:29.080 20:22:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:29.080 20:22:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:29.080 20:22:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:29.080 20:22:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:29.080 20:22:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:29.080 20:22:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:29.080 20:22:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.080 20:22:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:29.080 20:22:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.080 20:22:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.080 20:22:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.081 20:22:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.081 20:22:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.081 20:22:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:29.081 20:22:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.081 20:22:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.081 20:22:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.081 20:22:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.081 "name": "raid_bdev1", 00:09:29.081 "uuid": "bf861a45-606c-41e2-b748-632b3763743f", 00:09:29.081 "strip_size_kb": 64, 00:09:29.081 "state": "online", 00:09:29.081 "raid_level": "raid0", 00:09:29.081 "superblock": true, 00:09:29.081 "num_base_bdevs": 3, 00:09:29.081 "num_base_bdevs_discovered": 3, 00:09:29.081 "num_base_bdevs_operational": 3, 00:09:29.081 "base_bdevs_list": [ 00:09:29.081 { 00:09:29.081 "name": "BaseBdev1", 00:09:29.081 "uuid": "b477b221-43a4-563a-b175-86c5d27cac03", 00:09:29.081 "is_configured": true, 00:09:29.081 "data_offset": 2048, 00:09:29.081 "data_size": 63488 
00:09:29.081 }, 00:09:29.081 { 00:09:29.081 "name": "BaseBdev2", 00:09:29.081 "uuid": "7db1952e-1a83-5e9b-ae68-b3000754cf39", 00:09:29.081 "is_configured": true, 00:09:29.081 "data_offset": 2048, 00:09:29.081 "data_size": 63488 00:09:29.081 }, 00:09:29.081 { 00:09:29.081 "name": "BaseBdev3", 00:09:29.081 "uuid": "8405d531-9519-5809-9dbe-1bfbae9f8768", 00:09:29.081 "is_configured": true, 00:09:29.081 "data_offset": 2048, 00:09:29.081 "data_size": 63488 00:09:29.081 } 00:09:29.081 ] 00:09:29.081 }' 00:09:29.081 20:22:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.081 20:22:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.339 20:22:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:29.339 20:22:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.339 20:22:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.339 [2024-11-26 20:22:22.877599] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:29.339 [2024-11-26 20:22:22.877651] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:29.339 [2024-11-26 20:22:22.880233] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:29.339 [2024-11-26 20:22:22.880295] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:29.339 [2024-11-26 20:22:22.880344] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:29.339 [2024-11-26 20:22:22.880368] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:09:29.339 { 00:09:29.339 "results": [ 00:09:29.339 { 00:09:29.339 "job": "raid_bdev1", 00:09:29.339 "core_mask": "0x1", 00:09:29.339 "workload": "randrw", 00:09:29.339 "percentage": 50, 
00:09:29.339 "status": "finished", 00:09:29.339 "queue_depth": 1, 00:09:29.339 "io_size": 131072, 00:09:29.339 "runtime": 1.385633, 00:09:29.339 "iops": 14446.105137507551, 00:09:29.339 "mibps": 1805.763142188444, 00:09:29.339 "io_failed": 1, 00:09:29.339 "io_timeout": 0, 00:09:29.339 "avg_latency_us": 96.50226097822004, 00:09:29.339 "min_latency_us": 19.451528384279477, 00:09:29.339 "max_latency_us": 1631.2454148471616 00:09:29.339 } 00:09:29.339 ], 00:09:29.339 "core_count": 1 00:09:29.339 } 00:09:29.339 20:22:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.339 20:22:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76895 00:09:29.339 20:22:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 76895 ']' 00:09:29.339 20:22:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 76895 00:09:29.339 20:22:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:29.598 20:22:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:29.598 20:22:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76895 00:09:29.598 20:22:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:29.598 20:22:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:29.598 20:22:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76895' 00:09:29.598 killing process with pid 76895 00:09:29.598 20:22:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 76895 00:09:29.598 [2024-11-26 20:22:22.915419] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:29.598 20:22:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 76895 00:09:29.598 [2024-11-26 
20:22:22.955019] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:29.857 20:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:29.857 20:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.yQ59p2czFP 00:09:29.857 20:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:29.857 20:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:09:29.857 20:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:29.857 20:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:29.857 20:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:29.857 20:22:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:09:29.857 00:09:29.857 real 0m3.495s 00:09:29.857 user 0m4.348s 00:09:29.857 sys 0m0.629s 00:09:29.857 20:22:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:29.857 20:22:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.857 ************************************ 00:09:29.857 END TEST raid_read_error_test 00:09:29.857 ************************************ 00:09:29.857 20:22:23 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:09:29.857 20:22:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:29.857 20:22:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:29.857 20:22:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:29.857 ************************************ 00:09:29.857 START TEST raid_write_error_test 00:09:29.857 ************************************ 00:09:29.857 20:22:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 write 00:09:29.857 20:22:23 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:29.857 20:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:29.857 20:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:30.115 20:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:30.115 20:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:30.115 20:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:30.115 20:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:30.115 20:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:30.115 20:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:30.115 20:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:30.115 20:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:30.115 20:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:30.115 20:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:30.115 20:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:30.115 20:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:30.115 20:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:30.115 20:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:30.115 20:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:30.115 20:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:30.115 20:22:23 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:30.115 20:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:30.115 20:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:30.115 20:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:30.115 20:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:30.115 20:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:30.115 20:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.rbFiqt1jyb 00:09:30.115 20:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=77029 00:09:30.115 20:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 77029 00:09:30.115 20:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:30.115 20:22:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 77029 ']' 00:09:30.115 20:22:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.115 20:22:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:30.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.115 20:22:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:30.115 20:22:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:30.115 20:22:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.115 [2024-11-26 20:22:23.513776] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:30.115 [2024-11-26 20:22:23.513930] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77029 ] 00:09:30.374 [2024-11-26 20:22:23.674323] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.374 [2024-11-26 20:22:23.764538] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.374 [2024-11-26 20:22:23.838030] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:30.374 [2024-11-26 20:22:23.838076] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:30.944 20:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:30.944 20:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:30.944 20:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:30.944 20:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:30.944 20:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.944 20:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.944 BaseBdev1_malloc 00:09:30.944 20:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.944 20:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:30.944 20:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.944 20:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.944 true 00:09:30.944 20:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.944 20:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:30.944 20:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.944 20:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.944 [2024-11-26 20:22:24.430424] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:30.944 [2024-11-26 20:22:24.430497] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:30.944 [2024-11-26 20:22:24.430541] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:30.944 [2024-11-26 20:22:24.430557] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:30.944 [2024-11-26 20:22:24.432849] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:30.944 [2024-11-26 20:22:24.432893] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:30.944 BaseBdev1 00:09:30.944 20:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.944 20:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:30.944 20:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:30.944 20:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.944 20:22:24 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:30.944 BaseBdev2_malloc 00:09:30.944 20:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.944 20:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:30.944 20:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.944 20:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.944 true 00:09:30.944 20:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.944 20:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:30.944 20:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.944 20:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.944 [2024-11-26 20:22:24.482968] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:30.944 [2024-11-26 20:22:24.483058] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:30.944 [2024-11-26 20:22:24.483103] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:30.944 [2024-11-26 20:22:24.483114] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:30.944 [2024-11-26 20:22:24.485683] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:30.944 [2024-11-26 20:22:24.485733] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:30.944 BaseBdev2 00:09:30.944 20:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.944 20:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:30.944 20:22:24 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:30.944 20:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.944 20:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.205 BaseBdev3_malloc 00:09:31.205 20:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.205 20:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:31.205 20:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.205 20:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.205 true 00:09:31.205 20:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.205 20:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:31.205 20:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.205 20:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.205 [2024-11-26 20:22:24.529093] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:31.205 [2024-11-26 20:22:24.529159] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:31.205 [2024-11-26 20:22:24.529186] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:31.205 [2024-11-26 20:22:24.529197] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:31.205 [2024-11-26 20:22:24.531685] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:31.205 [2024-11-26 20:22:24.531726] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:31.205 BaseBdev3 00:09:31.205 20:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.205 20:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:31.205 20:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.205 20:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.205 [2024-11-26 20:22:24.541175] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:31.205 [2024-11-26 20:22:24.543301] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:31.205 [2024-11-26 20:22:24.543401] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:31.205 [2024-11-26 20:22:24.543639] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:31.205 [2024-11-26 20:22:24.543671] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:31.205 [2024-11-26 20:22:24.544016] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:31.205 [2024-11-26 20:22:24.544195] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:31.205 [2024-11-26 20:22:24.544215] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:09:31.206 [2024-11-26 20:22:24.544385] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:31.206 20:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.206 20:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:31.206 20:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:09:31.206 20:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:31.206 20:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:31.206 20:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.206 20:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.206 20:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.206 20:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.206 20:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.206 20:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.206 20:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.206 20:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:31.206 20:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.206 20:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.206 20:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.206 20:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.206 "name": "raid_bdev1", 00:09:31.206 "uuid": "30f52b7c-8797-4c69-98df-759fc30985ef", 00:09:31.206 "strip_size_kb": 64, 00:09:31.206 "state": "online", 00:09:31.206 "raid_level": "raid0", 00:09:31.206 "superblock": true, 00:09:31.206 "num_base_bdevs": 3, 00:09:31.206 "num_base_bdevs_discovered": 3, 00:09:31.206 "num_base_bdevs_operational": 3, 00:09:31.206 "base_bdevs_list": [ 00:09:31.206 { 00:09:31.206 "name": "BaseBdev1", 
00:09:31.206 "uuid": "44859542-827a-5757-9e27-970bbc8b40ff", 00:09:31.206 "is_configured": true, 00:09:31.206 "data_offset": 2048, 00:09:31.206 "data_size": 63488 00:09:31.206 }, 00:09:31.206 { 00:09:31.206 "name": "BaseBdev2", 00:09:31.206 "uuid": "7c33c8c7-dcb0-5ec7-9769-89cb212eb57e", 00:09:31.206 "is_configured": true, 00:09:31.206 "data_offset": 2048, 00:09:31.206 "data_size": 63488 00:09:31.206 }, 00:09:31.206 { 00:09:31.206 "name": "BaseBdev3", 00:09:31.206 "uuid": "c6a77d4a-9bbf-5baf-bba8-4db782b3bc97", 00:09:31.206 "is_configured": true, 00:09:31.206 "data_offset": 2048, 00:09:31.206 "data_size": 63488 00:09:31.206 } 00:09:31.206 ] 00:09:31.206 }' 00:09:31.206 20:22:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.206 20:22:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.464 20:22:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:31.464 20:22:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:31.723 [2024-11-26 20:22:25.084745] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:32.662 20:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:32.662 20:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.662 20:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.662 20:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.662 20:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:32.662 20:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:32.662 20:22:26 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:32.662 20:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:32.662 20:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:32.662 20:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:32.662 20:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:32.662 20:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.662 20:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.662 20:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.662 20:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.662 20:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.662 20:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.662 20:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.662 20:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:32.662 20:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.662 20:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.662 20:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.662 20:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.662 "name": "raid_bdev1", 00:09:32.662 "uuid": "30f52b7c-8797-4c69-98df-759fc30985ef", 00:09:32.662 "strip_size_kb": 64, 00:09:32.662 "state": "online", 00:09:32.662 
"raid_level": "raid0", 00:09:32.662 "superblock": true, 00:09:32.662 "num_base_bdevs": 3, 00:09:32.662 "num_base_bdevs_discovered": 3, 00:09:32.662 "num_base_bdevs_operational": 3, 00:09:32.662 "base_bdevs_list": [ 00:09:32.662 { 00:09:32.662 "name": "BaseBdev1", 00:09:32.662 "uuid": "44859542-827a-5757-9e27-970bbc8b40ff", 00:09:32.662 "is_configured": true, 00:09:32.662 "data_offset": 2048, 00:09:32.662 "data_size": 63488 00:09:32.662 }, 00:09:32.662 { 00:09:32.662 "name": "BaseBdev2", 00:09:32.662 "uuid": "7c33c8c7-dcb0-5ec7-9769-89cb212eb57e", 00:09:32.662 "is_configured": true, 00:09:32.662 "data_offset": 2048, 00:09:32.662 "data_size": 63488 00:09:32.662 }, 00:09:32.662 { 00:09:32.662 "name": "BaseBdev3", 00:09:32.662 "uuid": "c6a77d4a-9bbf-5baf-bba8-4db782b3bc97", 00:09:32.662 "is_configured": true, 00:09:32.662 "data_offset": 2048, 00:09:32.662 "data_size": 63488 00:09:32.662 } 00:09:32.662 ] 00:09:32.662 }' 00:09:32.662 20:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.662 20:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.922 20:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:32.922 20:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.922 20:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.922 [2024-11-26 20:22:26.418073] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:32.922 [2024-11-26 20:22:26.418111] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:32.922 [2024-11-26 20:22:26.420978] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:32.922 [2024-11-26 20:22:26.421035] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:32.922 [2024-11-26 20:22:26.421075] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:32.922 [2024-11-26 20:22:26.421089] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:09:32.922 { 00:09:32.922 "results": [ 00:09:32.922 { 00:09:32.922 "job": "raid_bdev1", 00:09:32.922 "core_mask": "0x1", 00:09:32.922 "workload": "randrw", 00:09:32.922 "percentage": 50, 00:09:32.922 "status": "finished", 00:09:32.922 "queue_depth": 1, 00:09:32.922 "io_size": 131072, 00:09:32.922 "runtime": 1.334045, 00:09:32.922 "iops": 13840.61257303914, 00:09:32.922 "mibps": 1730.0765716298924, 00:09:32.922 "io_failed": 1, 00:09:32.922 "io_timeout": 0, 00:09:32.922 "avg_latency_us": 101.14296198283782, 00:09:32.922 "min_latency_us": 25.6, 00:09:32.922 "max_latency_us": 1581.1633187772925 00:09:32.922 } 00:09:32.922 ], 00:09:32.922 "core_count": 1 00:09:32.922 } 00:09:32.922 20:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.922 20:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 77029 00:09:32.922 20:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 77029 ']' 00:09:32.922 20:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 77029 00:09:32.922 20:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:09:32.922 20:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:32.922 20:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77029 00:09:32.922 20:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:32.922 20:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:32.922 killing process with pid 77029 00:09:32.922 20:22:26 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77029' 00:09:32.922 20:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 77029 00:09:32.922 [2024-11-26 20:22:26.465083] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:32.922 20:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 77029 00:09:33.181 [2024-11-26 20:22:26.509865] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:33.440 20:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.rbFiqt1jyb 00:09:33.440 20:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:33.440 20:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:33.440 20:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:09:33.440 20:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:33.440 20:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:33.440 20:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:33.440 20:22:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:09:33.440 00:09:33.440 real 0m3.483s 00:09:33.440 user 0m4.249s 00:09:33.440 sys 0m0.634s 00:09:33.441 20:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:33.441 20:22:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.441 ************************************ 00:09:33.441 END TEST raid_write_error_test 00:09:33.441 ************************************ 00:09:33.441 20:22:26 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:33.441 20:22:26 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:09:33.441 20:22:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:33.441 20:22:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:33.441 20:22:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:33.441 ************************************ 00:09:33.441 START TEST raid_state_function_test 00:09:33.441 ************************************ 00:09:33.441 20:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 false 00:09:33.441 20:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:33.441 20:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:33.441 20:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:33.441 20:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:33.441 20:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:33.441 20:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:33.441 20:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:33.441 20:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:33.441 20:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:33.441 20:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:33.441 20:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:33.441 20:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:33.441 20:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:33.441 20:22:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:33.441 20:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:33.441 20:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:33.441 20:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:33.441 20:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:33.441 20:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:33.441 20:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:33.441 20:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:33.441 20:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:33.441 20:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:33.441 20:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:33.441 20:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:33.441 20:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:33.441 20:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=77162 00:09:33.441 Process raid pid: 77162 00:09:33.441 20:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 77162' 00:09:33.441 20:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 77162 00:09:33.441 20:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 77162 ']' 00:09:33.441 20:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 
-- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.441 20:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:33.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.441 20:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.441 20:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:33.441 20:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:33.441 20:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.700 [2024-11-26 20:22:27.055197] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:33.700 [2024-11-26 20:22:27.055369] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:33.700 [2024-11-26 20:22:27.201537] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.959 [2024-11-26 20:22:27.282147] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.959 [2024-11-26 20:22:27.359676] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:33.959 [2024-11-26 20:22:27.359711] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:34.568 20:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:34.568 20:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:34.568 20:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 
64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:34.568 20:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.568 20:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.568 [2024-11-26 20:22:27.914080] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:34.568 [2024-11-26 20:22:27.914141] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:34.568 [2024-11-26 20:22:27.914157] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:34.568 [2024-11-26 20:22:27.914168] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:34.568 [2024-11-26 20:22:27.914175] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:34.568 [2024-11-26 20:22:27.914188] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:34.568 20:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.568 20:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:34.568 20:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.568 20:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.568 20:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:34.568 20:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.568 20:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.568 20:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:34.568 20:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.568 20:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.568 20:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.568 20:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.568 20:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.568 20:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.568 20:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.568 20:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.568 20:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.568 "name": "Existed_Raid", 00:09:34.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.568 "strip_size_kb": 64, 00:09:34.568 "state": "configuring", 00:09:34.568 "raid_level": "concat", 00:09:34.568 "superblock": false, 00:09:34.569 "num_base_bdevs": 3, 00:09:34.569 "num_base_bdevs_discovered": 0, 00:09:34.569 "num_base_bdevs_operational": 3, 00:09:34.569 "base_bdevs_list": [ 00:09:34.569 { 00:09:34.569 "name": "BaseBdev1", 00:09:34.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.569 "is_configured": false, 00:09:34.569 "data_offset": 0, 00:09:34.569 "data_size": 0 00:09:34.569 }, 00:09:34.569 { 00:09:34.569 "name": "BaseBdev2", 00:09:34.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.569 "is_configured": false, 00:09:34.569 "data_offset": 0, 00:09:34.569 "data_size": 0 00:09:34.569 }, 00:09:34.569 { 00:09:34.569 "name": "BaseBdev3", 00:09:34.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.569 "is_configured": 
false, 00:09:34.569 "data_offset": 0, 00:09:34.569 "data_size": 0 00:09:34.569 } 00:09:34.569 ] 00:09:34.569 }' 00:09:34.569 20:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.569 20:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.138 20:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:35.138 20:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.138 20:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.138 [2024-11-26 20:22:28.389180] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:35.138 [2024-11-26 20:22:28.389226] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:09:35.138 20:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.138 20:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:35.138 20:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.138 20:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.138 [2024-11-26 20:22:28.401213] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:35.138 [2024-11-26 20:22:28.401266] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:35.138 [2024-11-26 20:22:28.401275] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:35.138 [2024-11-26 20:22:28.401284] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:35.138 [2024-11-26 20:22:28.401291] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:35.138 [2024-11-26 20:22:28.401299] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:35.138 20:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.138 20:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:35.138 20:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.138 20:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.138 [2024-11-26 20:22:28.424271] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:35.138 BaseBdev1 00:09:35.138 20:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.138 20:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:35.138 20:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:35.138 20:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:35.138 20:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:35.138 20:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:35.138 20:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:35.138 20:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:35.138 20:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.138 20:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.138 20:22:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.138 20:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:35.138 20:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.138 20:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.138 [ 00:09:35.138 { 00:09:35.138 "name": "BaseBdev1", 00:09:35.138 "aliases": [ 00:09:35.138 "7a4a28c8-97bd-4a46-9a36-a133cc18f31a" 00:09:35.138 ], 00:09:35.138 "product_name": "Malloc disk", 00:09:35.138 "block_size": 512, 00:09:35.138 "num_blocks": 65536, 00:09:35.138 "uuid": "7a4a28c8-97bd-4a46-9a36-a133cc18f31a", 00:09:35.138 "assigned_rate_limits": { 00:09:35.139 "rw_ios_per_sec": 0, 00:09:35.139 "rw_mbytes_per_sec": 0, 00:09:35.139 "r_mbytes_per_sec": 0, 00:09:35.139 "w_mbytes_per_sec": 0 00:09:35.139 }, 00:09:35.139 "claimed": true, 00:09:35.139 "claim_type": "exclusive_write", 00:09:35.139 "zoned": false, 00:09:35.139 "supported_io_types": { 00:09:35.139 "read": true, 00:09:35.139 "write": true, 00:09:35.139 "unmap": true, 00:09:35.139 "flush": true, 00:09:35.139 "reset": true, 00:09:35.139 "nvme_admin": false, 00:09:35.139 "nvme_io": false, 00:09:35.139 "nvme_io_md": false, 00:09:35.139 "write_zeroes": true, 00:09:35.139 "zcopy": true, 00:09:35.139 "get_zone_info": false, 00:09:35.139 "zone_management": false, 00:09:35.139 "zone_append": false, 00:09:35.139 "compare": false, 00:09:35.139 "compare_and_write": false, 00:09:35.139 "abort": true, 00:09:35.139 "seek_hole": false, 00:09:35.139 "seek_data": false, 00:09:35.139 "copy": true, 00:09:35.139 "nvme_iov_md": false 00:09:35.139 }, 00:09:35.139 "memory_domains": [ 00:09:35.139 { 00:09:35.139 "dma_device_id": "system", 00:09:35.139 "dma_device_type": 1 00:09:35.139 }, 00:09:35.139 { 00:09:35.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.139 "dma_device_type": 2 00:09:35.139 } 00:09:35.139 ], 
00:09:35.139 "driver_specific": {} 00:09:35.139 } 00:09:35.139 ] 00:09:35.139 20:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.139 20:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:35.139 20:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:35.139 20:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.139 20:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.139 20:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:35.139 20:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.139 20:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.139 20:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.139 20:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.139 20:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.139 20:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.139 20:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.139 20:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.139 20:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.139 20:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.139 20:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:35.139 20:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.139 "name": "Existed_Raid", 00:09:35.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.139 "strip_size_kb": 64, 00:09:35.139 "state": "configuring", 00:09:35.139 "raid_level": "concat", 00:09:35.139 "superblock": false, 00:09:35.139 "num_base_bdevs": 3, 00:09:35.139 "num_base_bdevs_discovered": 1, 00:09:35.139 "num_base_bdevs_operational": 3, 00:09:35.139 "base_bdevs_list": [ 00:09:35.139 { 00:09:35.139 "name": "BaseBdev1", 00:09:35.139 "uuid": "7a4a28c8-97bd-4a46-9a36-a133cc18f31a", 00:09:35.139 "is_configured": true, 00:09:35.139 "data_offset": 0, 00:09:35.139 "data_size": 65536 00:09:35.139 }, 00:09:35.139 { 00:09:35.139 "name": "BaseBdev2", 00:09:35.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.139 "is_configured": false, 00:09:35.139 "data_offset": 0, 00:09:35.139 "data_size": 0 00:09:35.139 }, 00:09:35.139 { 00:09:35.139 "name": "BaseBdev3", 00:09:35.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.139 "is_configured": false, 00:09:35.139 "data_offset": 0, 00:09:35.139 "data_size": 0 00:09:35.139 } 00:09:35.139 ] 00:09:35.139 }' 00:09:35.139 20:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.139 20:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.399 20:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:35.399 20:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.399 20:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.399 [2024-11-26 20:22:28.899521] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:35.399 [2024-11-26 20:22:28.899580] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name 
Existed_Raid, state configuring 00:09:35.399 20:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.399 20:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:35.399 20:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.399 20:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.399 [2024-11-26 20:22:28.907540] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:35.399 [2024-11-26 20:22:28.909450] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:35.399 [2024-11-26 20:22:28.909495] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:35.399 [2024-11-26 20:22:28.909505] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:35.399 [2024-11-26 20:22:28.909532] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:35.399 20:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.399 20:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:35.399 20:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:35.399 20:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:35.399 20:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.399 20:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.399 20:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:09:35.399 20:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.399 20:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.399 20:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.399 20:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.399 20:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.399 20:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.399 20:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.399 20:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.399 20:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.399 20:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.399 20:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.659 20:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.659 "name": "Existed_Raid", 00:09:35.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.659 "strip_size_kb": 64, 00:09:35.659 "state": "configuring", 00:09:35.659 "raid_level": "concat", 00:09:35.659 "superblock": false, 00:09:35.659 "num_base_bdevs": 3, 00:09:35.659 "num_base_bdevs_discovered": 1, 00:09:35.659 "num_base_bdevs_operational": 3, 00:09:35.659 "base_bdevs_list": [ 00:09:35.659 { 00:09:35.659 "name": "BaseBdev1", 00:09:35.659 "uuid": "7a4a28c8-97bd-4a46-9a36-a133cc18f31a", 00:09:35.659 "is_configured": true, 00:09:35.659 "data_offset": 0, 00:09:35.659 "data_size": 65536 00:09:35.659 }, 00:09:35.659 { 
00:09:35.659 "name": "BaseBdev2", 00:09:35.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.659 "is_configured": false, 00:09:35.659 "data_offset": 0, 00:09:35.659 "data_size": 0 00:09:35.659 }, 00:09:35.659 { 00:09:35.659 "name": "BaseBdev3", 00:09:35.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.659 "is_configured": false, 00:09:35.659 "data_offset": 0, 00:09:35.659 "data_size": 0 00:09:35.659 } 00:09:35.659 ] 00:09:35.659 }' 00:09:35.659 20:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.659 20:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.918 20:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:35.918 20:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.918 20:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.918 [2024-11-26 20:22:29.328594] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:35.918 BaseBdev2 00:09:35.918 20:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.918 20:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:35.918 20:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:35.918 20:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:35.918 20:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:35.918 20:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:35.918 20:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:35.918 20:22:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:35.918 20:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.918 20:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.918 20:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.918 20:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:35.918 20:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.918 20:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.918 [ 00:09:35.918 { 00:09:35.918 "name": "BaseBdev2", 00:09:35.918 "aliases": [ 00:09:35.918 "5ba6f734-0139-4a5b-b0fb-c8027302c314" 00:09:35.918 ], 00:09:35.918 "product_name": "Malloc disk", 00:09:35.918 "block_size": 512, 00:09:35.918 "num_blocks": 65536, 00:09:35.918 "uuid": "5ba6f734-0139-4a5b-b0fb-c8027302c314", 00:09:35.918 "assigned_rate_limits": { 00:09:35.918 "rw_ios_per_sec": 0, 00:09:35.918 "rw_mbytes_per_sec": 0, 00:09:35.918 "r_mbytes_per_sec": 0, 00:09:35.918 "w_mbytes_per_sec": 0 00:09:35.918 }, 00:09:35.918 "claimed": true, 00:09:35.918 "claim_type": "exclusive_write", 00:09:35.918 "zoned": false, 00:09:35.918 "supported_io_types": { 00:09:35.918 "read": true, 00:09:35.918 "write": true, 00:09:35.918 "unmap": true, 00:09:35.918 "flush": true, 00:09:35.918 "reset": true, 00:09:35.918 "nvme_admin": false, 00:09:35.918 "nvme_io": false, 00:09:35.918 "nvme_io_md": false, 00:09:35.918 "write_zeroes": true, 00:09:35.918 "zcopy": true, 00:09:35.918 "get_zone_info": false, 00:09:35.918 "zone_management": false, 00:09:35.918 "zone_append": false, 00:09:35.918 "compare": false, 00:09:35.918 "compare_and_write": false, 00:09:35.918 "abort": true, 00:09:35.918 "seek_hole": false, 00:09:35.918 "seek_data": false, 00:09:35.918 
"copy": true, 00:09:35.918 "nvme_iov_md": false 00:09:35.918 }, 00:09:35.918 "memory_domains": [ 00:09:35.918 { 00:09:35.918 "dma_device_id": "system", 00:09:35.918 "dma_device_type": 1 00:09:35.918 }, 00:09:35.918 { 00:09:35.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.918 "dma_device_type": 2 00:09:35.918 } 00:09:35.918 ], 00:09:35.918 "driver_specific": {} 00:09:35.918 } 00:09:35.918 ] 00:09:35.918 20:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.918 20:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:35.918 20:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:35.918 20:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:35.918 20:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:35.918 20:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.918 20:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.918 20:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:35.918 20:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.918 20:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.918 20:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.918 20:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.918 20:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.918 20:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.918 
20:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.918 20:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.918 20:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.918 20:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.918 20:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.918 20:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.918 "name": "Existed_Raid", 00:09:35.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.918 "strip_size_kb": 64, 00:09:35.918 "state": "configuring", 00:09:35.918 "raid_level": "concat", 00:09:35.918 "superblock": false, 00:09:35.918 "num_base_bdevs": 3, 00:09:35.918 "num_base_bdevs_discovered": 2, 00:09:35.918 "num_base_bdevs_operational": 3, 00:09:35.918 "base_bdevs_list": [ 00:09:35.918 { 00:09:35.918 "name": "BaseBdev1", 00:09:35.918 "uuid": "7a4a28c8-97bd-4a46-9a36-a133cc18f31a", 00:09:35.918 "is_configured": true, 00:09:35.918 "data_offset": 0, 00:09:35.918 "data_size": 65536 00:09:35.918 }, 00:09:35.918 { 00:09:35.918 "name": "BaseBdev2", 00:09:35.918 "uuid": "5ba6f734-0139-4a5b-b0fb-c8027302c314", 00:09:35.918 "is_configured": true, 00:09:35.918 "data_offset": 0, 00:09:35.918 "data_size": 65536 00:09:35.918 }, 00:09:35.918 { 00:09:35.918 "name": "BaseBdev3", 00:09:35.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.918 "is_configured": false, 00:09:35.918 "data_offset": 0, 00:09:35.918 "data_size": 0 00:09:35.918 } 00:09:35.918 ] 00:09:35.918 }' 00:09:35.918 20:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.918 20:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.487 20:22:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:36.487 20:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.487 20:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.487 [2024-11-26 20:22:29.793665] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:36.487 [2024-11-26 20:22:29.793730] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:36.487 [2024-11-26 20:22:29.793763] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:36.487 [2024-11-26 20:22:29.794253] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:36.487 [2024-11-26 20:22:29.794480] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:36.487 [2024-11-26 20:22:29.794500] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:09:36.487 BaseBdev3 00:09:36.487 [2024-11-26 20:22:29.794856] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:36.487 20:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.487 20:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:36.487 20:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:36.487 20:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:36.487 20:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:36.487 20:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:36.487 20:22:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:36.487 20:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:36.487 20:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.487 20:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.487 20:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.487 20:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:36.487 20:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.487 20:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.487 [ 00:09:36.487 { 00:09:36.487 "name": "BaseBdev3", 00:09:36.487 "aliases": [ 00:09:36.487 "6df93d73-5c5a-4ad7-9779-c161a80d5fad" 00:09:36.487 ], 00:09:36.487 "product_name": "Malloc disk", 00:09:36.487 "block_size": 512, 00:09:36.487 "num_blocks": 65536, 00:09:36.487 "uuid": "6df93d73-5c5a-4ad7-9779-c161a80d5fad", 00:09:36.487 "assigned_rate_limits": { 00:09:36.487 "rw_ios_per_sec": 0, 00:09:36.487 "rw_mbytes_per_sec": 0, 00:09:36.487 "r_mbytes_per_sec": 0, 00:09:36.487 "w_mbytes_per_sec": 0 00:09:36.487 }, 00:09:36.487 "claimed": true, 00:09:36.487 "claim_type": "exclusive_write", 00:09:36.487 "zoned": false, 00:09:36.487 "supported_io_types": { 00:09:36.487 "read": true, 00:09:36.487 "write": true, 00:09:36.487 "unmap": true, 00:09:36.487 "flush": true, 00:09:36.487 "reset": true, 00:09:36.487 "nvme_admin": false, 00:09:36.487 "nvme_io": false, 00:09:36.487 "nvme_io_md": false, 00:09:36.487 "write_zeroes": true, 00:09:36.487 "zcopy": true, 00:09:36.487 "get_zone_info": false, 00:09:36.487 "zone_management": false, 00:09:36.487 "zone_append": false, 00:09:36.487 "compare": false, 00:09:36.487 "compare_and_write": false, 
00:09:36.487 "abort": true, 00:09:36.487 "seek_hole": false, 00:09:36.487 "seek_data": false, 00:09:36.487 "copy": true, 00:09:36.487 "nvme_iov_md": false 00:09:36.487 }, 00:09:36.487 "memory_domains": [ 00:09:36.487 { 00:09:36.487 "dma_device_id": "system", 00:09:36.487 "dma_device_type": 1 00:09:36.488 }, 00:09:36.488 { 00:09:36.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.488 "dma_device_type": 2 00:09:36.488 } 00:09:36.488 ], 00:09:36.488 "driver_specific": {} 00:09:36.488 } 00:09:36.488 ] 00:09:36.488 20:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.488 20:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:36.488 20:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:36.488 20:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:36.488 20:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:36.488 20:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.488 20:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:36.488 20:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:36.488 20:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.488 20:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.488 20:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.488 20:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.488 20:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.488 
20:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.488 20:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.488 20:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.488 20:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.488 20:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.488 20:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.488 20:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.488 "name": "Existed_Raid", 00:09:36.488 "uuid": "1b11d1e2-ef22-422c-b248-e3e54f0d6385", 00:09:36.488 "strip_size_kb": 64, 00:09:36.488 "state": "online", 00:09:36.488 "raid_level": "concat", 00:09:36.488 "superblock": false, 00:09:36.488 "num_base_bdevs": 3, 00:09:36.488 "num_base_bdevs_discovered": 3, 00:09:36.488 "num_base_bdevs_operational": 3, 00:09:36.488 "base_bdevs_list": [ 00:09:36.488 { 00:09:36.488 "name": "BaseBdev1", 00:09:36.488 "uuid": "7a4a28c8-97bd-4a46-9a36-a133cc18f31a", 00:09:36.488 "is_configured": true, 00:09:36.488 "data_offset": 0, 00:09:36.488 "data_size": 65536 00:09:36.488 }, 00:09:36.488 { 00:09:36.488 "name": "BaseBdev2", 00:09:36.488 "uuid": "5ba6f734-0139-4a5b-b0fb-c8027302c314", 00:09:36.488 "is_configured": true, 00:09:36.488 "data_offset": 0, 00:09:36.488 "data_size": 65536 00:09:36.488 }, 00:09:36.488 { 00:09:36.488 "name": "BaseBdev3", 00:09:36.488 "uuid": "6df93d73-5c5a-4ad7-9779-c161a80d5fad", 00:09:36.488 "is_configured": true, 00:09:36.488 "data_offset": 0, 00:09:36.488 "data_size": 65536 00:09:36.488 } 00:09:36.488 ] 00:09:36.488 }' 00:09:36.488 20:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.488 20:22:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.747 20:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:36.747 20:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:36.747 20:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:36.747 20:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:36.747 20:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:36.747 20:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:36.747 20:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:36.747 20:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:36.747 20:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.747 20:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.747 [2024-11-26 20:22:30.289275] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:37.016 20:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.016 20:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:37.016 "name": "Existed_Raid", 00:09:37.016 "aliases": [ 00:09:37.016 "1b11d1e2-ef22-422c-b248-e3e54f0d6385" 00:09:37.016 ], 00:09:37.016 "product_name": "Raid Volume", 00:09:37.016 "block_size": 512, 00:09:37.016 "num_blocks": 196608, 00:09:37.016 "uuid": "1b11d1e2-ef22-422c-b248-e3e54f0d6385", 00:09:37.016 "assigned_rate_limits": { 00:09:37.016 "rw_ios_per_sec": 0, 00:09:37.016 "rw_mbytes_per_sec": 0, 00:09:37.016 "r_mbytes_per_sec": 0, 00:09:37.016 
"w_mbytes_per_sec": 0 00:09:37.016 }, 00:09:37.016 "claimed": false, 00:09:37.016 "zoned": false, 00:09:37.016 "supported_io_types": { 00:09:37.016 "read": true, 00:09:37.016 "write": true, 00:09:37.016 "unmap": true, 00:09:37.016 "flush": true, 00:09:37.016 "reset": true, 00:09:37.016 "nvme_admin": false, 00:09:37.016 "nvme_io": false, 00:09:37.016 "nvme_io_md": false, 00:09:37.016 "write_zeroes": true, 00:09:37.016 "zcopy": false, 00:09:37.016 "get_zone_info": false, 00:09:37.016 "zone_management": false, 00:09:37.016 "zone_append": false, 00:09:37.016 "compare": false, 00:09:37.016 "compare_and_write": false, 00:09:37.016 "abort": false, 00:09:37.016 "seek_hole": false, 00:09:37.016 "seek_data": false, 00:09:37.016 "copy": false, 00:09:37.016 "nvme_iov_md": false 00:09:37.016 }, 00:09:37.016 "memory_domains": [ 00:09:37.016 { 00:09:37.016 "dma_device_id": "system", 00:09:37.016 "dma_device_type": 1 00:09:37.016 }, 00:09:37.016 { 00:09:37.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.016 "dma_device_type": 2 00:09:37.016 }, 00:09:37.016 { 00:09:37.016 "dma_device_id": "system", 00:09:37.016 "dma_device_type": 1 00:09:37.016 }, 00:09:37.016 { 00:09:37.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.016 "dma_device_type": 2 00:09:37.016 }, 00:09:37.016 { 00:09:37.016 "dma_device_id": "system", 00:09:37.016 "dma_device_type": 1 00:09:37.016 }, 00:09:37.016 { 00:09:37.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.016 "dma_device_type": 2 00:09:37.016 } 00:09:37.016 ], 00:09:37.016 "driver_specific": { 00:09:37.016 "raid": { 00:09:37.016 "uuid": "1b11d1e2-ef22-422c-b248-e3e54f0d6385", 00:09:37.016 "strip_size_kb": 64, 00:09:37.016 "state": "online", 00:09:37.016 "raid_level": "concat", 00:09:37.016 "superblock": false, 00:09:37.016 "num_base_bdevs": 3, 00:09:37.016 "num_base_bdevs_discovered": 3, 00:09:37.016 "num_base_bdevs_operational": 3, 00:09:37.016 "base_bdevs_list": [ 00:09:37.016 { 00:09:37.016 "name": "BaseBdev1", 00:09:37.016 "uuid": 
"7a4a28c8-97bd-4a46-9a36-a133cc18f31a", 00:09:37.016 "is_configured": true, 00:09:37.016 "data_offset": 0, 00:09:37.016 "data_size": 65536 00:09:37.016 }, 00:09:37.016 { 00:09:37.016 "name": "BaseBdev2", 00:09:37.016 "uuid": "5ba6f734-0139-4a5b-b0fb-c8027302c314", 00:09:37.016 "is_configured": true, 00:09:37.016 "data_offset": 0, 00:09:37.016 "data_size": 65536 00:09:37.016 }, 00:09:37.016 { 00:09:37.016 "name": "BaseBdev3", 00:09:37.016 "uuid": "6df93d73-5c5a-4ad7-9779-c161a80d5fad", 00:09:37.016 "is_configured": true, 00:09:37.016 "data_offset": 0, 00:09:37.016 "data_size": 65536 00:09:37.016 } 00:09:37.016 ] 00:09:37.016 } 00:09:37.016 } 00:09:37.016 }' 00:09:37.016 20:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:37.016 20:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:37.016 BaseBdev2 00:09:37.016 BaseBdev3' 00:09:37.016 20:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.017 20:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:37.017 20:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:37.017 20:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:37.017 20:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.017 20:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.017 20:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.017 20:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.017 
20:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:37.017 20:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:37.017 20:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:37.017 20:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.017 20:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:37.017 20:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.017 20:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.017 20:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.017 20:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:37.017 20:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:37.017 20:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:37.017 20:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:37.017 20:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.017 20:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.017 20:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.017 20:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.017 20:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:37.017 
20:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:37.017 20:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:37.017 20:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.017 20:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.017 [2024-11-26 20:22:30.540571] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:37.017 [2024-11-26 20:22:30.540622] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:37.017 [2024-11-26 20:22:30.540728] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:37.290 20:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.290 20:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:37.290 20:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:37.290 20:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:37.290 20:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:37.290 20:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:37.290 20:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:37.290 20:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.291 20:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:37.291 20:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:37.291 20:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:09:37.291 20:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:37.291 20:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.291 20:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.291 20:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.291 20:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.291 20:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.291 20:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.291 20:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.291 20:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.291 20:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.291 20:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.291 "name": "Existed_Raid", 00:09:37.291 "uuid": "1b11d1e2-ef22-422c-b248-e3e54f0d6385", 00:09:37.291 "strip_size_kb": 64, 00:09:37.291 "state": "offline", 00:09:37.291 "raid_level": "concat", 00:09:37.291 "superblock": false, 00:09:37.291 "num_base_bdevs": 3, 00:09:37.291 "num_base_bdevs_discovered": 2, 00:09:37.291 "num_base_bdevs_operational": 2, 00:09:37.291 "base_bdevs_list": [ 00:09:37.291 { 00:09:37.291 "name": null, 00:09:37.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.291 "is_configured": false, 00:09:37.291 "data_offset": 0, 00:09:37.291 "data_size": 65536 00:09:37.291 }, 00:09:37.291 { 00:09:37.291 "name": "BaseBdev2", 00:09:37.291 "uuid": "5ba6f734-0139-4a5b-b0fb-c8027302c314", 00:09:37.291 
"is_configured": true, 00:09:37.291 "data_offset": 0, 00:09:37.291 "data_size": 65536 00:09:37.291 }, 00:09:37.291 { 00:09:37.291 "name": "BaseBdev3", 00:09:37.291 "uuid": "6df93d73-5c5a-4ad7-9779-c161a80d5fad", 00:09:37.291 "is_configured": true, 00:09:37.291 "data_offset": 0, 00:09:37.291 "data_size": 65536 00:09:37.291 } 00:09:37.291 ] 00:09:37.291 }' 00:09:37.291 20:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.291 20:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.549 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:37.549 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:37.549 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:37.549 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.549 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.549 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.549 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.808 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:37.808 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:37.808 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:37.808 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.808 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.808 [2024-11-26 20:22:31.137880] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:09:37.808 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.808 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:37.808 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:37.808 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:37.808 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.808 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.808 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.808 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.808 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:37.808 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:37.808 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:37.808 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.808 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.808 [2024-11-26 20:22:31.197726] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:37.808 [2024-11-26 20:22:31.197915] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:09:37.808 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.808 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:37.808 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < 
num_base_bdevs )) 00:09:37.808 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:37.808 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.808 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.808 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.808 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.808 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:37.809 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:37.809 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:37.809 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:37.809 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:37.809 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:37.809 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.809 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.809 BaseBdev2 00:09:37.809 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.809 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:37.809 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:37.809 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:37.809 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 
-- # local i 00:09:37.809 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:37.809 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:37.809 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:37.809 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.809 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.809 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.809 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:37.809 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.809 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.809 [ 00:09:37.809 { 00:09:37.809 "name": "BaseBdev2", 00:09:37.809 "aliases": [ 00:09:37.809 "eb780850-23d8-404b-b300-bd05e4a4f666" 00:09:37.809 ], 00:09:37.809 "product_name": "Malloc disk", 00:09:37.809 "block_size": 512, 00:09:37.809 "num_blocks": 65536, 00:09:37.809 "uuid": "eb780850-23d8-404b-b300-bd05e4a4f666", 00:09:37.809 "assigned_rate_limits": { 00:09:37.809 "rw_ios_per_sec": 0, 00:09:37.809 "rw_mbytes_per_sec": 0, 00:09:37.809 "r_mbytes_per_sec": 0, 00:09:37.809 "w_mbytes_per_sec": 0 00:09:37.809 }, 00:09:37.809 "claimed": false, 00:09:37.809 "zoned": false, 00:09:37.809 "supported_io_types": { 00:09:37.809 "read": true, 00:09:37.809 "write": true, 00:09:37.809 "unmap": true, 00:09:37.809 "flush": true, 00:09:37.809 "reset": true, 00:09:37.809 "nvme_admin": false, 00:09:37.809 "nvme_io": false, 00:09:37.809 "nvme_io_md": false, 00:09:37.809 "write_zeroes": true, 00:09:37.809 "zcopy": true, 00:09:37.809 "get_zone_info": false, 
00:09:37.809 "zone_management": false, 00:09:37.809 "zone_append": false, 00:09:37.809 "compare": false, 00:09:37.809 "compare_and_write": false, 00:09:37.809 "abort": true, 00:09:37.809 "seek_hole": false, 00:09:37.809 "seek_data": false, 00:09:37.809 "copy": true, 00:09:37.809 "nvme_iov_md": false 00:09:37.809 }, 00:09:37.809 "memory_domains": [ 00:09:37.809 { 00:09:37.809 "dma_device_id": "system", 00:09:37.809 "dma_device_type": 1 00:09:37.809 }, 00:09:37.809 { 00:09:37.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.809 "dma_device_type": 2 00:09:37.809 } 00:09:37.809 ], 00:09:37.809 "driver_specific": {} 00:09:37.809 } 00:09:37.809 ] 00:09:37.809 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.809 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:37.809 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:37.809 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:37.809 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:37.809 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.809 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.809 BaseBdev3 00:09:37.809 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.809 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:37.809 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:37.809 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:37.809 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 
00:09:37.809 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:37.809 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:37.809 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:37.809 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.809 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.809 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.809 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:37.809 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.809 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.809 [ 00:09:37.809 { 00:09:37.809 "name": "BaseBdev3", 00:09:37.809 "aliases": [ 00:09:37.809 "7ba8716c-91a5-4cfe-95af-a9f6debbd3f4" 00:09:37.809 ], 00:09:37.809 "product_name": "Malloc disk", 00:09:37.809 "block_size": 512, 00:09:37.809 "num_blocks": 65536, 00:09:37.809 "uuid": "7ba8716c-91a5-4cfe-95af-a9f6debbd3f4", 00:09:37.809 "assigned_rate_limits": { 00:09:37.809 "rw_ios_per_sec": 0, 00:09:37.809 "rw_mbytes_per_sec": 0, 00:09:38.069 "r_mbytes_per_sec": 0, 00:09:38.069 "w_mbytes_per_sec": 0 00:09:38.069 }, 00:09:38.069 "claimed": false, 00:09:38.069 "zoned": false, 00:09:38.069 "supported_io_types": { 00:09:38.069 "read": true, 00:09:38.069 "write": true, 00:09:38.069 "unmap": true, 00:09:38.069 "flush": true, 00:09:38.069 "reset": true, 00:09:38.069 "nvme_admin": false, 00:09:38.069 "nvme_io": false, 00:09:38.069 "nvme_io_md": false, 00:09:38.069 "write_zeroes": true, 00:09:38.069 "zcopy": true, 00:09:38.069 "get_zone_info": false, 00:09:38.069 
"zone_management": false, 00:09:38.069 "zone_append": false, 00:09:38.069 "compare": false, 00:09:38.069 "compare_and_write": false, 00:09:38.069 "abort": true, 00:09:38.069 "seek_hole": false, 00:09:38.069 "seek_data": false, 00:09:38.069 "copy": true, 00:09:38.069 "nvme_iov_md": false 00:09:38.069 }, 00:09:38.069 "memory_domains": [ 00:09:38.069 { 00:09:38.069 "dma_device_id": "system", 00:09:38.069 "dma_device_type": 1 00:09:38.069 }, 00:09:38.069 { 00:09:38.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.069 "dma_device_type": 2 00:09:38.069 } 00:09:38.069 ], 00:09:38.069 "driver_specific": {} 00:09:38.069 } 00:09:38.069 ] 00:09:38.069 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.069 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:38.069 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:38.069 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:38.069 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:38.069 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.069 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.069 [2024-11-26 20:22:31.373984] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:38.069 [2024-11-26 20:22:31.374192] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:38.069 [2024-11-26 20:22:31.374341] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:38.069 [2024-11-26 20:22:31.378740] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:38.069 20:22:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.069 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:38.069 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.069 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.069 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:38.069 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.069 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.069 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.069 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.069 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.069 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.069 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.069 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.069 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.069 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.069 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.069 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.069 "name": "Existed_Raid", 00:09:38.069 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:38.069 "strip_size_kb": 64, 00:09:38.069 "state": "configuring", 00:09:38.069 "raid_level": "concat", 00:09:38.069 "superblock": false, 00:09:38.069 "num_base_bdevs": 3, 00:09:38.069 "num_base_bdevs_discovered": 2, 00:09:38.069 "num_base_bdevs_operational": 3, 00:09:38.069 "base_bdevs_list": [ 00:09:38.069 { 00:09:38.069 "name": "BaseBdev1", 00:09:38.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.069 "is_configured": false, 00:09:38.069 "data_offset": 0, 00:09:38.069 "data_size": 0 00:09:38.069 }, 00:09:38.069 { 00:09:38.069 "name": "BaseBdev2", 00:09:38.069 "uuid": "eb780850-23d8-404b-b300-bd05e4a4f666", 00:09:38.069 "is_configured": true, 00:09:38.069 "data_offset": 0, 00:09:38.069 "data_size": 65536 00:09:38.069 }, 00:09:38.069 { 00:09:38.069 "name": "BaseBdev3", 00:09:38.069 "uuid": "7ba8716c-91a5-4cfe-95af-a9f6debbd3f4", 00:09:38.069 "is_configured": true, 00:09:38.069 "data_offset": 0, 00:09:38.070 "data_size": 65536 00:09:38.070 } 00:09:38.070 ] 00:09:38.070 }' 00:09:38.070 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.070 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.329 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:38.329 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.329 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.329 [2024-11-26 20:22:31.822380] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:38.329 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.329 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:38.329 20:22:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.329 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.329 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:38.329 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.329 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.329 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.329 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.329 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.329 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.329 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.329 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.329 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.329 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.329 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.589 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.589 "name": "Existed_Raid", 00:09:38.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.589 "strip_size_kb": 64, 00:09:38.589 "state": "configuring", 00:09:38.589 "raid_level": "concat", 00:09:38.589 "superblock": false, 00:09:38.589 "num_base_bdevs": 3, 00:09:38.589 "num_base_bdevs_discovered": 1, 00:09:38.589 
"num_base_bdevs_operational": 3, 00:09:38.589 "base_bdevs_list": [ 00:09:38.589 { 00:09:38.589 "name": "BaseBdev1", 00:09:38.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.589 "is_configured": false, 00:09:38.589 "data_offset": 0, 00:09:38.589 "data_size": 0 00:09:38.589 }, 00:09:38.589 { 00:09:38.589 "name": null, 00:09:38.589 "uuid": "eb780850-23d8-404b-b300-bd05e4a4f666", 00:09:38.589 "is_configured": false, 00:09:38.589 "data_offset": 0, 00:09:38.589 "data_size": 65536 00:09:38.589 }, 00:09:38.589 { 00:09:38.589 "name": "BaseBdev3", 00:09:38.589 "uuid": "7ba8716c-91a5-4cfe-95af-a9f6debbd3f4", 00:09:38.589 "is_configured": true, 00:09:38.589 "data_offset": 0, 00:09:38.589 "data_size": 65536 00:09:38.589 } 00:09:38.589 ] 00:09:38.589 }' 00:09:38.589 20:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.589 20:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.849 20:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.849 20:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.849 20:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.849 20:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:38.849 20:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.849 20:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:38.849 20:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:38.849 20:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.849 20:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:38.849 [2024-11-26 20:22:32.294490] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:38.849 BaseBdev1 00:09:38.849 20:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.849 20:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:38.849 20:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:38.849 20:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:38.849 20:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:38.849 20:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:38.849 20:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:38.849 20:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:38.849 20:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.849 20:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.849 20:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.849 20:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:38.849 20:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.849 20:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.849 [ 00:09:38.849 { 00:09:38.849 "name": "BaseBdev1", 00:09:38.849 "aliases": [ 00:09:38.849 "f5003936-e578-4a38-8fb9-122920f4a0fe" 00:09:38.849 ], 00:09:38.849 "product_name": "Malloc disk", 00:09:38.849 "block_size": 512, 00:09:38.849 "num_blocks": 65536, 00:09:38.849 
"uuid": "f5003936-e578-4a38-8fb9-122920f4a0fe", 00:09:38.849 "assigned_rate_limits": { 00:09:38.849 "rw_ios_per_sec": 0, 00:09:38.849 "rw_mbytes_per_sec": 0, 00:09:38.849 "r_mbytes_per_sec": 0, 00:09:38.849 "w_mbytes_per_sec": 0 00:09:38.849 }, 00:09:38.849 "claimed": true, 00:09:38.849 "claim_type": "exclusive_write", 00:09:38.849 "zoned": false, 00:09:38.849 "supported_io_types": { 00:09:38.849 "read": true, 00:09:38.849 "write": true, 00:09:38.849 "unmap": true, 00:09:38.849 "flush": true, 00:09:38.849 "reset": true, 00:09:38.849 "nvme_admin": false, 00:09:38.849 "nvme_io": false, 00:09:38.849 "nvme_io_md": false, 00:09:38.849 "write_zeroes": true, 00:09:38.849 "zcopy": true, 00:09:38.849 "get_zone_info": false, 00:09:38.849 "zone_management": false, 00:09:38.849 "zone_append": false, 00:09:38.849 "compare": false, 00:09:38.849 "compare_and_write": false, 00:09:38.849 "abort": true, 00:09:38.849 "seek_hole": false, 00:09:38.849 "seek_data": false, 00:09:38.849 "copy": true, 00:09:38.849 "nvme_iov_md": false 00:09:38.849 }, 00:09:38.849 "memory_domains": [ 00:09:38.849 { 00:09:38.850 "dma_device_id": "system", 00:09:38.850 "dma_device_type": 1 00:09:38.850 }, 00:09:38.850 { 00:09:38.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.850 "dma_device_type": 2 00:09:38.850 } 00:09:38.850 ], 00:09:38.850 "driver_specific": {} 00:09:38.850 } 00:09:38.850 ] 00:09:38.850 20:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.850 20:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:38.850 20:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:38.850 20:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.850 20:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.850 
20:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:38.850 20:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.850 20:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.850 20:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.850 20:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.850 20:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.850 20:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.850 20:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.850 20:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.850 20:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.850 20:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.850 20:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.850 20:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.850 "name": "Existed_Raid", 00:09:38.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.850 "strip_size_kb": 64, 00:09:38.850 "state": "configuring", 00:09:38.850 "raid_level": "concat", 00:09:38.850 "superblock": false, 00:09:38.850 "num_base_bdevs": 3, 00:09:38.850 "num_base_bdevs_discovered": 2, 00:09:38.850 "num_base_bdevs_operational": 3, 00:09:38.850 "base_bdevs_list": [ 00:09:38.850 { 00:09:38.850 "name": "BaseBdev1", 00:09:38.850 "uuid": "f5003936-e578-4a38-8fb9-122920f4a0fe", 00:09:38.850 "is_configured": true, 00:09:38.850 
"data_offset": 0, 00:09:38.850 "data_size": 65536 00:09:38.850 }, 00:09:38.850 { 00:09:38.850 "name": null, 00:09:38.850 "uuid": "eb780850-23d8-404b-b300-bd05e4a4f666", 00:09:38.850 "is_configured": false, 00:09:38.850 "data_offset": 0, 00:09:38.850 "data_size": 65536 00:09:38.850 }, 00:09:38.850 { 00:09:38.850 "name": "BaseBdev3", 00:09:38.850 "uuid": "7ba8716c-91a5-4cfe-95af-a9f6debbd3f4", 00:09:38.850 "is_configured": true, 00:09:38.850 "data_offset": 0, 00:09:38.850 "data_size": 65536 00:09:38.850 } 00:09:38.850 ] 00:09:38.850 }' 00:09:38.850 20:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.850 20:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.421 20:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.421 20:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:39.421 20:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.421 20:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.421 20:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.421 20:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:39.421 20:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:39.421 20:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.421 20:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.421 [2024-11-26 20:22:32.861621] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:39.421 20:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.421 
20:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:39.421 20:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.421 20:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.421 20:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:39.421 20:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.421 20:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.421 20:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.421 20:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.421 20:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.421 20:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.421 20:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.421 20:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.421 20:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.421 20:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.421 20:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.421 20:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.421 "name": "Existed_Raid", 00:09:39.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.421 "strip_size_kb": 64, 00:09:39.421 "state": "configuring", 
00:09:39.421 "raid_level": "concat", 00:09:39.421 "superblock": false, 00:09:39.421 "num_base_bdevs": 3, 00:09:39.421 "num_base_bdevs_discovered": 1, 00:09:39.421 "num_base_bdevs_operational": 3, 00:09:39.421 "base_bdevs_list": [ 00:09:39.421 { 00:09:39.421 "name": "BaseBdev1", 00:09:39.421 "uuid": "f5003936-e578-4a38-8fb9-122920f4a0fe", 00:09:39.421 "is_configured": true, 00:09:39.421 "data_offset": 0, 00:09:39.421 "data_size": 65536 00:09:39.421 }, 00:09:39.421 { 00:09:39.421 "name": null, 00:09:39.421 "uuid": "eb780850-23d8-404b-b300-bd05e4a4f666", 00:09:39.421 "is_configured": false, 00:09:39.421 "data_offset": 0, 00:09:39.421 "data_size": 65536 00:09:39.421 }, 00:09:39.421 { 00:09:39.421 "name": null, 00:09:39.421 "uuid": "7ba8716c-91a5-4cfe-95af-a9f6debbd3f4", 00:09:39.421 "is_configured": false, 00:09:39.421 "data_offset": 0, 00:09:39.421 "data_size": 65536 00:09:39.421 } 00:09:39.421 ] 00:09:39.421 }' 00:09:39.421 20:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.421 20:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.992 20:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:39.992 20:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.992 20:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.992 20:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.992 20:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.992 20:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:39.992 20:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:39.992 20:22:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.992 20:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.992 [2024-11-26 20:22:33.360844] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:39.992 20:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.992 20:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:39.992 20:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.992 20:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.992 20:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:39.992 20:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.992 20:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.992 20:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.992 20:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.992 20:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.992 20:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.992 20:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.992 20:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.992 20:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.992 20:22:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.992 20:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.992 20:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.992 "name": "Existed_Raid", 00:09:39.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.992 "strip_size_kb": 64, 00:09:39.992 "state": "configuring", 00:09:39.992 "raid_level": "concat", 00:09:39.992 "superblock": false, 00:09:39.992 "num_base_bdevs": 3, 00:09:39.992 "num_base_bdevs_discovered": 2, 00:09:39.992 "num_base_bdevs_operational": 3, 00:09:39.992 "base_bdevs_list": [ 00:09:39.992 { 00:09:39.992 "name": "BaseBdev1", 00:09:39.992 "uuid": "f5003936-e578-4a38-8fb9-122920f4a0fe", 00:09:39.992 "is_configured": true, 00:09:39.992 "data_offset": 0, 00:09:39.992 "data_size": 65536 00:09:39.992 }, 00:09:39.992 { 00:09:39.992 "name": null, 00:09:39.992 "uuid": "eb780850-23d8-404b-b300-bd05e4a4f666", 00:09:39.992 "is_configured": false, 00:09:39.992 "data_offset": 0, 00:09:39.992 "data_size": 65536 00:09:39.992 }, 00:09:39.992 { 00:09:39.992 "name": "BaseBdev3", 00:09:39.992 "uuid": "7ba8716c-91a5-4cfe-95af-a9f6debbd3f4", 00:09:39.992 "is_configured": true, 00:09:39.992 "data_offset": 0, 00:09:39.992 "data_size": 65536 00:09:39.992 } 00:09:39.992 ] 00:09:39.992 }' 00:09:39.992 20:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.992 20:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.278 20:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:40.278 20:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.278 20:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.278 20:22:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.278 20:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.278 20:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:40.278 20:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:40.278 20:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.278 20:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.278 [2024-11-26 20:22:33.824091] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:40.536 20:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.537 20:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:40.537 20:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.537 20:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.537 20:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:40.537 20:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.537 20:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:40.537 20:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.537 20:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.537 20:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.537 20:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:09:40.537 20:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.537 20:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.537 20:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.537 20:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.537 20:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.537 20:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.537 "name": "Existed_Raid", 00:09:40.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.537 "strip_size_kb": 64, 00:09:40.537 "state": "configuring", 00:09:40.537 "raid_level": "concat", 00:09:40.537 "superblock": false, 00:09:40.537 "num_base_bdevs": 3, 00:09:40.537 "num_base_bdevs_discovered": 1, 00:09:40.537 "num_base_bdevs_operational": 3, 00:09:40.537 "base_bdevs_list": [ 00:09:40.537 { 00:09:40.537 "name": null, 00:09:40.537 "uuid": "f5003936-e578-4a38-8fb9-122920f4a0fe", 00:09:40.537 "is_configured": false, 00:09:40.537 "data_offset": 0, 00:09:40.537 "data_size": 65536 00:09:40.537 }, 00:09:40.537 { 00:09:40.537 "name": null, 00:09:40.537 "uuid": "eb780850-23d8-404b-b300-bd05e4a4f666", 00:09:40.537 "is_configured": false, 00:09:40.537 "data_offset": 0, 00:09:40.537 "data_size": 65536 00:09:40.537 }, 00:09:40.537 { 00:09:40.537 "name": "BaseBdev3", 00:09:40.537 "uuid": "7ba8716c-91a5-4cfe-95af-a9f6debbd3f4", 00:09:40.537 "is_configured": true, 00:09:40.537 "data_offset": 0, 00:09:40.537 "data_size": 65536 00:09:40.537 } 00:09:40.537 ] 00:09:40.537 }' 00:09:40.537 20:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.537 20:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.796 
20:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.796 20:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:40.796 20:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.796 20:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.796 20:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.796 20:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:40.796 20:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:40.796 20:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.796 20:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.796 [2024-11-26 20:22:34.308780] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:40.796 20:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.796 20:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:40.796 20:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.796 20:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.796 20:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:40.796 20:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.796 20:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:40.796 
20:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.796 20:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.796 20:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.796 20:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.796 20:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.796 20:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.796 20:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.796 20:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.796 20:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.055 20:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.055 "name": "Existed_Raid", 00:09:41.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.055 "strip_size_kb": 64, 00:09:41.055 "state": "configuring", 00:09:41.055 "raid_level": "concat", 00:09:41.055 "superblock": false, 00:09:41.055 "num_base_bdevs": 3, 00:09:41.055 "num_base_bdevs_discovered": 2, 00:09:41.055 "num_base_bdevs_operational": 3, 00:09:41.055 "base_bdevs_list": [ 00:09:41.055 { 00:09:41.055 "name": null, 00:09:41.055 "uuid": "f5003936-e578-4a38-8fb9-122920f4a0fe", 00:09:41.055 "is_configured": false, 00:09:41.055 "data_offset": 0, 00:09:41.055 "data_size": 65536 00:09:41.055 }, 00:09:41.055 { 00:09:41.055 "name": "BaseBdev2", 00:09:41.055 "uuid": "eb780850-23d8-404b-b300-bd05e4a4f666", 00:09:41.055 "is_configured": true, 00:09:41.055 "data_offset": 0, 00:09:41.055 "data_size": 65536 00:09:41.055 }, 00:09:41.055 { 00:09:41.055 "name": "BaseBdev3", 00:09:41.055 
"uuid": "7ba8716c-91a5-4cfe-95af-a9f6debbd3f4", 00:09:41.055 "is_configured": true, 00:09:41.055 "data_offset": 0, 00:09:41.055 "data_size": 65536 00:09:41.055 } 00:09:41.055 ] 00:09:41.055 }' 00:09:41.055 20:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.055 20:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.315 20:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.315 20:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:41.315 20:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.315 20:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.315 20:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.315 20:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:41.315 20:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.315 20:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.315 20:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.315 20:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:41.575 20:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.575 20:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f5003936-e578-4a38-8fb9-122920f4a0fe 00:09:41.575 20:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.575 20:22:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:41.575 [2024-11-26 20:22:34.916753] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:41.575 [2024-11-26 20:22:34.916798] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:41.575 [2024-11-26 20:22:34.916808] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:41.575 [2024-11-26 20:22:34.917065] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:41.575 [2024-11-26 20:22:34.917189] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:41.575 [2024-11-26 20:22:34.917198] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:09:41.575 [2024-11-26 20:22:34.917440] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:41.575 NewBaseBdev 00:09:41.575 20:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.575 20:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:41.575 20:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:41.575 20:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:41.575 20:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:41.575 20:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:41.575 20:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:41.575 20:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:41.575 20:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.575 
20:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.575 20:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.575 20:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:41.575 20:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.575 20:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.575 [ 00:09:41.575 { 00:09:41.575 "name": "NewBaseBdev", 00:09:41.575 "aliases": [ 00:09:41.575 "f5003936-e578-4a38-8fb9-122920f4a0fe" 00:09:41.575 ], 00:09:41.575 "product_name": "Malloc disk", 00:09:41.575 "block_size": 512, 00:09:41.575 "num_blocks": 65536, 00:09:41.575 "uuid": "f5003936-e578-4a38-8fb9-122920f4a0fe", 00:09:41.575 "assigned_rate_limits": { 00:09:41.575 "rw_ios_per_sec": 0, 00:09:41.575 "rw_mbytes_per_sec": 0, 00:09:41.575 "r_mbytes_per_sec": 0, 00:09:41.575 "w_mbytes_per_sec": 0 00:09:41.575 }, 00:09:41.575 "claimed": true, 00:09:41.575 "claim_type": "exclusive_write", 00:09:41.575 "zoned": false, 00:09:41.575 "supported_io_types": { 00:09:41.575 "read": true, 00:09:41.575 "write": true, 00:09:41.575 "unmap": true, 00:09:41.575 "flush": true, 00:09:41.575 "reset": true, 00:09:41.575 "nvme_admin": false, 00:09:41.575 "nvme_io": false, 00:09:41.575 "nvme_io_md": false, 00:09:41.575 "write_zeroes": true, 00:09:41.575 "zcopy": true, 00:09:41.575 "get_zone_info": false, 00:09:41.575 "zone_management": false, 00:09:41.575 "zone_append": false, 00:09:41.575 "compare": false, 00:09:41.575 "compare_and_write": false, 00:09:41.575 "abort": true, 00:09:41.575 "seek_hole": false, 00:09:41.575 "seek_data": false, 00:09:41.575 "copy": true, 00:09:41.575 "nvme_iov_md": false 00:09:41.575 }, 00:09:41.575 "memory_domains": [ 00:09:41.575 { 00:09:41.575 "dma_device_id": "system", 00:09:41.575 "dma_device_type": 1 
00:09:41.575 }, 00:09:41.575 { 00:09:41.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.575 "dma_device_type": 2 00:09:41.575 } 00:09:41.575 ], 00:09:41.575 "driver_specific": {} 00:09:41.575 } 00:09:41.575 ] 00:09:41.575 20:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.575 20:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:41.575 20:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:41.575 20:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.575 20:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:41.575 20:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:41.575 20:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.575 20:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:41.575 20:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.575 20:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.575 20:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.575 20:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.575 20:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.575 20:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.576 20:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.576 20:22:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:41.576 20:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.576 20:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.576 "name": "Existed_Raid", 00:09:41.576 "uuid": "aa9f72aa-35cb-403c-8167-34cee573cc96", 00:09:41.576 "strip_size_kb": 64, 00:09:41.576 "state": "online", 00:09:41.576 "raid_level": "concat", 00:09:41.576 "superblock": false, 00:09:41.576 "num_base_bdevs": 3, 00:09:41.576 "num_base_bdevs_discovered": 3, 00:09:41.576 "num_base_bdevs_operational": 3, 00:09:41.576 "base_bdevs_list": [ 00:09:41.576 { 00:09:41.576 "name": "NewBaseBdev", 00:09:41.576 "uuid": "f5003936-e578-4a38-8fb9-122920f4a0fe", 00:09:41.576 "is_configured": true, 00:09:41.576 "data_offset": 0, 00:09:41.576 "data_size": 65536 00:09:41.576 }, 00:09:41.576 { 00:09:41.576 "name": "BaseBdev2", 00:09:41.576 "uuid": "eb780850-23d8-404b-b300-bd05e4a4f666", 00:09:41.576 "is_configured": true, 00:09:41.576 "data_offset": 0, 00:09:41.576 "data_size": 65536 00:09:41.576 }, 00:09:41.576 { 00:09:41.576 "name": "BaseBdev3", 00:09:41.576 "uuid": "7ba8716c-91a5-4cfe-95af-a9f6debbd3f4", 00:09:41.576 "is_configured": true, 00:09:41.576 "data_offset": 0, 00:09:41.576 "data_size": 65536 00:09:41.576 } 00:09:41.576 ] 00:09:41.576 }' 00:09:41.576 20:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.576 20:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.146 20:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:42.146 20:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:42.146 20:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:42.146 20:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- 
# local base_bdev_names 00:09:42.146 20:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:42.146 20:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:42.146 20:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:42.146 20:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:42.146 20:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.146 20:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.146 [2024-11-26 20:22:35.420473] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:42.146 20:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.146 20:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:42.146 "name": "Existed_Raid", 00:09:42.146 "aliases": [ 00:09:42.146 "aa9f72aa-35cb-403c-8167-34cee573cc96" 00:09:42.146 ], 00:09:42.146 "product_name": "Raid Volume", 00:09:42.146 "block_size": 512, 00:09:42.146 "num_blocks": 196608, 00:09:42.146 "uuid": "aa9f72aa-35cb-403c-8167-34cee573cc96", 00:09:42.146 "assigned_rate_limits": { 00:09:42.146 "rw_ios_per_sec": 0, 00:09:42.146 "rw_mbytes_per_sec": 0, 00:09:42.146 "r_mbytes_per_sec": 0, 00:09:42.146 "w_mbytes_per_sec": 0 00:09:42.146 }, 00:09:42.146 "claimed": false, 00:09:42.146 "zoned": false, 00:09:42.146 "supported_io_types": { 00:09:42.146 "read": true, 00:09:42.146 "write": true, 00:09:42.146 "unmap": true, 00:09:42.146 "flush": true, 00:09:42.146 "reset": true, 00:09:42.146 "nvme_admin": false, 00:09:42.146 "nvme_io": false, 00:09:42.146 "nvme_io_md": false, 00:09:42.146 "write_zeroes": true, 00:09:42.146 "zcopy": false, 00:09:42.146 "get_zone_info": false, 00:09:42.146 "zone_management": false, 00:09:42.146 
"zone_append": false, 00:09:42.146 "compare": false, 00:09:42.146 "compare_and_write": false, 00:09:42.146 "abort": false, 00:09:42.146 "seek_hole": false, 00:09:42.146 "seek_data": false, 00:09:42.146 "copy": false, 00:09:42.146 "nvme_iov_md": false 00:09:42.146 }, 00:09:42.146 "memory_domains": [ 00:09:42.146 { 00:09:42.146 "dma_device_id": "system", 00:09:42.146 "dma_device_type": 1 00:09:42.146 }, 00:09:42.146 { 00:09:42.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.146 "dma_device_type": 2 00:09:42.146 }, 00:09:42.146 { 00:09:42.146 "dma_device_id": "system", 00:09:42.146 "dma_device_type": 1 00:09:42.146 }, 00:09:42.146 { 00:09:42.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.146 "dma_device_type": 2 00:09:42.146 }, 00:09:42.146 { 00:09:42.146 "dma_device_id": "system", 00:09:42.146 "dma_device_type": 1 00:09:42.146 }, 00:09:42.146 { 00:09:42.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.146 "dma_device_type": 2 00:09:42.146 } 00:09:42.146 ], 00:09:42.146 "driver_specific": { 00:09:42.146 "raid": { 00:09:42.146 "uuid": "aa9f72aa-35cb-403c-8167-34cee573cc96", 00:09:42.146 "strip_size_kb": 64, 00:09:42.146 "state": "online", 00:09:42.146 "raid_level": "concat", 00:09:42.146 "superblock": false, 00:09:42.146 "num_base_bdevs": 3, 00:09:42.146 "num_base_bdevs_discovered": 3, 00:09:42.147 "num_base_bdevs_operational": 3, 00:09:42.147 "base_bdevs_list": [ 00:09:42.147 { 00:09:42.147 "name": "NewBaseBdev", 00:09:42.147 "uuid": "f5003936-e578-4a38-8fb9-122920f4a0fe", 00:09:42.147 "is_configured": true, 00:09:42.147 "data_offset": 0, 00:09:42.147 "data_size": 65536 00:09:42.147 }, 00:09:42.147 { 00:09:42.147 "name": "BaseBdev2", 00:09:42.147 "uuid": "eb780850-23d8-404b-b300-bd05e4a4f666", 00:09:42.147 "is_configured": true, 00:09:42.147 "data_offset": 0, 00:09:42.147 "data_size": 65536 00:09:42.147 }, 00:09:42.147 { 00:09:42.147 "name": "BaseBdev3", 00:09:42.147 "uuid": "7ba8716c-91a5-4cfe-95af-a9f6debbd3f4", 00:09:42.147 "is_configured": 
true, 00:09:42.147 "data_offset": 0, 00:09:42.147 "data_size": 65536 00:09:42.147 } 00:09:42.147 ] 00:09:42.147 } 00:09:42.147 } 00:09:42.147 }' 00:09:42.147 20:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:42.147 20:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:42.147 BaseBdev2 00:09:42.147 BaseBdev3' 00:09:42.147 20:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.147 20:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:42.147 20:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:42.147 20:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.147 20:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:42.147 20:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.147 20:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.147 20:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.147 20:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:42.147 20:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:42.147 20:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:42.147 20:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:42.147 20:22:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.147 20:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.147 20:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.147 20:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.147 20:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:42.147 20:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:42.147 20:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:42.147 20:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.147 20:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:42.147 20:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.147 20:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.147 20:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.147 20:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:42.147 20:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:42.147 20:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:42.147 20:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.147 20:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.147 [2024-11-26 20:22:35.691663] bdev_raid.c:2407:raid_bdev_delete: 
*DEBUG*: delete raid bdev: Existed_Raid 00:09:42.147 [2024-11-26 20:22:35.691769] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:42.147 [2024-11-26 20:22:35.691901] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:42.147 [2024-11-26 20:22:35.692015] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:42.147 [2024-11-26 20:22:35.692077] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:09:42.406 20:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.406 20:22:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 77162 00:09:42.406 20:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 77162 ']' 00:09:42.406 20:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 77162 00:09:42.406 20:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:42.406 20:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:42.406 20:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77162 00:09:42.406 20:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:42.406 20:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:42.406 killing process with pid 77162 00:09:42.406 20:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77162' 00:09:42.406 20:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 77162 00:09:42.406 [2024-11-26 20:22:35.743175] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:09:42.406 20:22:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 77162 00:09:42.406 [2024-11-26 20:22:35.793271] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:42.664 20:22:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:42.664 00:09:42.664 real 0m9.225s 00:09:42.664 user 0m15.521s 00:09:42.664 sys 0m1.918s 00:09:42.664 20:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:42.664 ************************************ 00:09:42.664 END TEST raid_state_function_test 00:09:42.665 ************************************ 00:09:42.665 20:22:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.923 20:22:36 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:09:42.923 20:22:36 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:42.923 20:22:36 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:42.923 20:22:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:42.923 ************************************ 00:09:42.923 START TEST raid_state_function_test_sb 00:09:42.923 ************************************ 00:09:42.923 20:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 true 00:09:42.923 20:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:42.923 20:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:42.923 20:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:42.923 20:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:42.923 20:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:09:42.923 20:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:42.923 20:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:42.923 20:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:42.923 20:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:42.923 20:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:42.923 20:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:42.923 20:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:42.923 20:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:42.923 20:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:42.923 20:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:42.923 20:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:42.923 20:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:42.923 20:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:42.923 20:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:42.923 20:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:42.923 20:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:42.923 20:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:42.923 20:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:09:42.923 20:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:42.923 20:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:42.923 20:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:42.923 20:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=77772 00:09:42.923 20:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:42.923 20:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 77772' 00:09:42.923 Process raid pid: 77772 00:09:42.923 20:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 77772 00:09:42.923 20:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 77772 ']' 00:09:42.923 20:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.923 20:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:42.923 20:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.923 20:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:42.923 20:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.923 [2024-11-26 20:22:36.346354] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:42.923 [2024-11-26 20:22:36.346518] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:43.182 [2024-11-26 20:22:36.508548] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.182 [2024-11-26 20:22:36.590491] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.182 [2024-11-26 20:22:36.670238] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:43.182 [2024-11-26 20:22:36.670278] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:43.749 20:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:43.749 20:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:43.749 20:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:43.749 20:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.749 20:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.749 [2024-11-26 20:22:37.258093] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:43.749 [2024-11-26 20:22:37.258243] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:43.749 [2024-11-26 20:22:37.258265] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:43.749 [2024-11-26 20:22:37.258277] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:43.749 [2024-11-26 20:22:37.258284] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:09:43.749 [2024-11-26 20:22:37.258297] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:43.749 20:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.749 20:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:43.749 20:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.749 20:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.749 20:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:43.749 20:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:43.749 20:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:43.749 20:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.749 20:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.749 20:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.749 20:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.749 20:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.749 20:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.750 20:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.750 20:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.750 20:22:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.008 20:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.008 "name": "Existed_Raid", 00:09:44.008 "uuid": "37ab79a4-fb22-4b30-ae2c-17c5febc6da8", 00:09:44.008 "strip_size_kb": 64, 00:09:44.008 "state": "configuring", 00:09:44.008 "raid_level": "concat", 00:09:44.008 "superblock": true, 00:09:44.008 "num_base_bdevs": 3, 00:09:44.008 "num_base_bdevs_discovered": 0, 00:09:44.008 "num_base_bdevs_operational": 3, 00:09:44.008 "base_bdevs_list": [ 00:09:44.008 { 00:09:44.008 "name": "BaseBdev1", 00:09:44.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.008 "is_configured": false, 00:09:44.008 "data_offset": 0, 00:09:44.008 "data_size": 0 00:09:44.008 }, 00:09:44.008 { 00:09:44.008 "name": "BaseBdev2", 00:09:44.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.008 "is_configured": false, 00:09:44.008 "data_offset": 0, 00:09:44.008 "data_size": 0 00:09:44.008 }, 00:09:44.008 { 00:09:44.008 "name": "BaseBdev3", 00:09:44.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.008 "is_configured": false, 00:09:44.008 "data_offset": 0, 00:09:44.008 "data_size": 0 00:09:44.008 } 00:09:44.008 ] 00:09:44.008 }' 00:09:44.008 20:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.008 20:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.267 20:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:44.267 20:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.267 20:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.267 [2024-11-26 20:22:37.741155] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:44.267 [2024-11-26 20:22:37.741292] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:09:44.267 20:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.267 20:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:44.267 20:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.267 20:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.267 [2024-11-26 20:22:37.753209] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:44.267 [2024-11-26 20:22:37.753348] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:44.267 [2024-11-26 20:22:37.753384] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:44.267 [2024-11-26 20:22:37.753411] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:44.267 [2024-11-26 20:22:37.753448] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:44.267 [2024-11-26 20:22:37.753479] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:44.267 20:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.267 20:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:44.267 20:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.267 20:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.267 [2024-11-26 20:22:37.776928] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:44.267 BaseBdev1 
00:09:44.267 20:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.267 20:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:44.267 20:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:44.267 20:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:44.267 20:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:44.267 20:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:44.267 20:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:44.267 20:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:44.267 20:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.267 20:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.267 20:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.267 20:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:44.267 20:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.267 20:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.267 [ 00:09:44.267 { 00:09:44.267 "name": "BaseBdev1", 00:09:44.267 "aliases": [ 00:09:44.267 "3871beda-f74c-46a0-98a6-2df80430e61f" 00:09:44.267 ], 00:09:44.267 "product_name": "Malloc disk", 00:09:44.267 "block_size": 512, 00:09:44.267 "num_blocks": 65536, 00:09:44.267 "uuid": "3871beda-f74c-46a0-98a6-2df80430e61f", 00:09:44.267 "assigned_rate_limits": { 00:09:44.267 
"rw_ios_per_sec": 0, 00:09:44.267 "rw_mbytes_per_sec": 0, 00:09:44.267 "r_mbytes_per_sec": 0, 00:09:44.267 "w_mbytes_per_sec": 0 00:09:44.267 }, 00:09:44.267 "claimed": true, 00:09:44.267 "claim_type": "exclusive_write", 00:09:44.267 "zoned": false, 00:09:44.267 "supported_io_types": { 00:09:44.267 "read": true, 00:09:44.267 "write": true, 00:09:44.267 "unmap": true, 00:09:44.267 "flush": true, 00:09:44.267 "reset": true, 00:09:44.267 "nvme_admin": false, 00:09:44.267 "nvme_io": false, 00:09:44.267 "nvme_io_md": false, 00:09:44.267 "write_zeroes": true, 00:09:44.267 "zcopy": true, 00:09:44.267 "get_zone_info": false, 00:09:44.267 "zone_management": false, 00:09:44.267 "zone_append": false, 00:09:44.267 "compare": false, 00:09:44.267 "compare_and_write": false, 00:09:44.267 "abort": true, 00:09:44.267 "seek_hole": false, 00:09:44.267 "seek_data": false, 00:09:44.267 "copy": true, 00:09:44.267 "nvme_iov_md": false 00:09:44.267 }, 00:09:44.267 "memory_domains": [ 00:09:44.267 { 00:09:44.267 "dma_device_id": "system", 00:09:44.267 "dma_device_type": 1 00:09:44.267 }, 00:09:44.267 { 00:09:44.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.267 "dma_device_type": 2 00:09:44.267 } 00:09:44.267 ], 00:09:44.267 "driver_specific": {} 00:09:44.267 } 00:09:44.267 ] 00:09:44.267 20:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.267 20:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:44.267 20:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:44.267 20:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.267 20:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.267 20:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:09:44.267 20:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.267 20:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:44.527 20:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.527 20:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.527 20:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.527 20:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.527 20:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.527 20:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.527 20:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.527 20:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.527 20:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.527 20:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.527 "name": "Existed_Raid", 00:09:44.527 "uuid": "f7c5a3f4-0363-4582-8053-d7ad44af8dc3", 00:09:44.527 "strip_size_kb": 64, 00:09:44.527 "state": "configuring", 00:09:44.527 "raid_level": "concat", 00:09:44.527 "superblock": true, 00:09:44.527 "num_base_bdevs": 3, 00:09:44.527 "num_base_bdevs_discovered": 1, 00:09:44.527 "num_base_bdevs_operational": 3, 00:09:44.527 "base_bdevs_list": [ 00:09:44.527 { 00:09:44.527 "name": "BaseBdev1", 00:09:44.527 "uuid": "3871beda-f74c-46a0-98a6-2df80430e61f", 00:09:44.527 "is_configured": true, 00:09:44.527 "data_offset": 2048, 00:09:44.527 "data_size": 
63488 00:09:44.527 }, 00:09:44.527 { 00:09:44.527 "name": "BaseBdev2", 00:09:44.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.527 "is_configured": false, 00:09:44.527 "data_offset": 0, 00:09:44.527 "data_size": 0 00:09:44.527 }, 00:09:44.527 { 00:09:44.527 "name": "BaseBdev3", 00:09:44.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.527 "is_configured": false, 00:09:44.527 "data_offset": 0, 00:09:44.527 "data_size": 0 00:09:44.527 } 00:09:44.527 ] 00:09:44.527 }' 00:09:44.527 20:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.527 20:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.785 20:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:44.785 20:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.785 20:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.785 [2024-11-26 20:22:38.240345] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:44.786 [2024-11-26 20:22:38.240411] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:09:44.786 20:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.786 20:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:44.786 20:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.786 20:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.786 [2024-11-26 20:22:38.248363] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:44.786 [2024-11-26 
20:22:38.250314] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:44.786 [2024-11-26 20:22:38.250406] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:44.786 [2024-11-26 20:22:38.250420] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:44.786 [2024-11-26 20:22:38.250432] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:44.786 20:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.786 20:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:44.786 20:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:44.786 20:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:44.786 20:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.786 20:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.786 20:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:44.786 20:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.786 20:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:44.786 20:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.786 20:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.786 20:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.786 20:22:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.786 20:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.786 20:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.786 20:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.786 20:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.786 20:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.786 20:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.786 "name": "Existed_Raid", 00:09:44.786 "uuid": "9c5e6ebf-ea1f-42f7-87f1-3f869d4ad79b", 00:09:44.786 "strip_size_kb": 64, 00:09:44.786 "state": "configuring", 00:09:44.786 "raid_level": "concat", 00:09:44.786 "superblock": true, 00:09:44.786 "num_base_bdevs": 3, 00:09:44.786 "num_base_bdevs_discovered": 1, 00:09:44.786 "num_base_bdevs_operational": 3, 00:09:44.786 "base_bdevs_list": [ 00:09:44.786 { 00:09:44.786 "name": "BaseBdev1", 00:09:44.786 "uuid": "3871beda-f74c-46a0-98a6-2df80430e61f", 00:09:44.786 "is_configured": true, 00:09:44.786 "data_offset": 2048, 00:09:44.786 "data_size": 63488 00:09:44.786 }, 00:09:44.786 { 00:09:44.786 "name": "BaseBdev2", 00:09:44.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.786 "is_configured": false, 00:09:44.786 "data_offset": 0, 00:09:44.786 "data_size": 0 00:09:44.786 }, 00:09:44.786 { 00:09:44.786 "name": "BaseBdev3", 00:09:44.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.786 "is_configured": false, 00:09:44.786 "data_offset": 0, 00:09:44.786 "data_size": 0 00:09:44.786 } 00:09:44.786 ] 00:09:44.786 }' 00:09:44.786 20:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.786 20:22:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:45.353 20:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:45.353 20:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.353 20:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.353 [2024-11-26 20:22:38.729995] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:45.353 BaseBdev2 00:09:45.353 20:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.353 20:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:45.353 20:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:45.353 20:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:45.353 20:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:45.353 20:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:45.353 20:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:45.353 20:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:45.353 20:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.353 20:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.353 20:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.353 20:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:45.353 20:22:38 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.353 20:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.353 [ 00:09:45.353 { 00:09:45.353 "name": "BaseBdev2", 00:09:45.353 "aliases": [ 00:09:45.353 "0c41c186-74bf-45d1-94c6-701d679e795d" 00:09:45.353 ], 00:09:45.353 "product_name": "Malloc disk", 00:09:45.353 "block_size": 512, 00:09:45.353 "num_blocks": 65536, 00:09:45.353 "uuid": "0c41c186-74bf-45d1-94c6-701d679e795d", 00:09:45.353 "assigned_rate_limits": { 00:09:45.353 "rw_ios_per_sec": 0, 00:09:45.353 "rw_mbytes_per_sec": 0, 00:09:45.353 "r_mbytes_per_sec": 0, 00:09:45.353 "w_mbytes_per_sec": 0 00:09:45.353 }, 00:09:45.353 "claimed": true, 00:09:45.353 "claim_type": "exclusive_write", 00:09:45.353 "zoned": false, 00:09:45.353 "supported_io_types": { 00:09:45.353 "read": true, 00:09:45.353 "write": true, 00:09:45.353 "unmap": true, 00:09:45.353 "flush": true, 00:09:45.353 "reset": true, 00:09:45.353 "nvme_admin": false, 00:09:45.353 "nvme_io": false, 00:09:45.353 "nvme_io_md": false, 00:09:45.353 "write_zeroes": true, 00:09:45.353 "zcopy": true, 00:09:45.353 "get_zone_info": false, 00:09:45.353 "zone_management": false, 00:09:45.353 "zone_append": false, 00:09:45.353 "compare": false, 00:09:45.353 "compare_and_write": false, 00:09:45.353 "abort": true, 00:09:45.353 "seek_hole": false, 00:09:45.353 "seek_data": false, 00:09:45.353 "copy": true, 00:09:45.353 "nvme_iov_md": false 00:09:45.353 }, 00:09:45.353 "memory_domains": [ 00:09:45.353 { 00:09:45.353 "dma_device_id": "system", 00:09:45.353 "dma_device_type": 1 00:09:45.353 }, 00:09:45.353 { 00:09:45.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.353 "dma_device_type": 2 00:09:45.353 } 00:09:45.353 ], 00:09:45.353 "driver_specific": {} 00:09:45.353 } 00:09:45.353 ] 00:09:45.353 20:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.353 20:22:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@907 -- # return 0 00:09:45.353 20:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:45.353 20:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:45.353 20:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:45.353 20:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.353 20:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.353 20:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:45.353 20:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.353 20:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:45.353 20:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.353 20:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.353 20:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.353 20:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.354 20:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.354 20:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.354 20:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.354 20:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.354 20:22:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.354 20:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.354 "name": "Existed_Raid", 00:09:45.354 "uuid": "9c5e6ebf-ea1f-42f7-87f1-3f869d4ad79b", 00:09:45.354 "strip_size_kb": 64, 00:09:45.354 "state": "configuring", 00:09:45.354 "raid_level": "concat", 00:09:45.354 "superblock": true, 00:09:45.354 "num_base_bdevs": 3, 00:09:45.354 "num_base_bdevs_discovered": 2, 00:09:45.354 "num_base_bdevs_operational": 3, 00:09:45.354 "base_bdevs_list": [ 00:09:45.354 { 00:09:45.354 "name": "BaseBdev1", 00:09:45.354 "uuid": "3871beda-f74c-46a0-98a6-2df80430e61f", 00:09:45.354 "is_configured": true, 00:09:45.354 "data_offset": 2048, 00:09:45.354 "data_size": 63488 00:09:45.354 }, 00:09:45.354 { 00:09:45.354 "name": "BaseBdev2", 00:09:45.354 "uuid": "0c41c186-74bf-45d1-94c6-701d679e795d", 00:09:45.354 "is_configured": true, 00:09:45.354 "data_offset": 2048, 00:09:45.354 "data_size": 63488 00:09:45.354 }, 00:09:45.354 { 00:09:45.354 "name": "BaseBdev3", 00:09:45.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.354 "is_configured": false, 00:09:45.354 "data_offset": 0, 00:09:45.354 "data_size": 0 00:09:45.354 } 00:09:45.354 ] 00:09:45.354 }' 00:09:45.354 20:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.354 20:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.922 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:45.922 20:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.922 20:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.922 BaseBdev3 00:09:45.922 [2024-11-26 20:22:39.230243] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:45.922 [2024-11-26 
20:22:39.230463] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:45.922 [2024-11-26 20:22:39.230485] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:45.922 [2024-11-26 20:22:39.230809] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:45.922 [2024-11-26 20:22:39.230936] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:45.922 [2024-11-26 20:22:39.230954] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:09:45.922 [2024-11-26 20:22:39.231065] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:45.922 20:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.922 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:45.922 20:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:45.922 20:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:45.922 20:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:45.922 20:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:45.922 20:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:45.922 20:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:45.922 20:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.922 20:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.922 20:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:45.922 20:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:45.922 20:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.922 20:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.922 [ 00:09:45.922 { 00:09:45.922 "name": "BaseBdev3", 00:09:45.922 "aliases": [ 00:09:45.922 "bfb560de-95dd-4c9a-828b-243bccfcd0a5" 00:09:45.922 ], 00:09:45.922 "product_name": "Malloc disk", 00:09:45.922 "block_size": 512, 00:09:45.922 "num_blocks": 65536, 00:09:45.922 "uuid": "bfb560de-95dd-4c9a-828b-243bccfcd0a5", 00:09:45.922 "assigned_rate_limits": { 00:09:45.922 "rw_ios_per_sec": 0, 00:09:45.922 "rw_mbytes_per_sec": 0, 00:09:45.922 "r_mbytes_per_sec": 0, 00:09:45.922 "w_mbytes_per_sec": 0 00:09:45.922 }, 00:09:45.922 "claimed": true, 00:09:45.922 "claim_type": "exclusive_write", 00:09:45.922 "zoned": false, 00:09:45.922 "supported_io_types": { 00:09:45.922 "read": true, 00:09:45.922 "write": true, 00:09:45.922 "unmap": true, 00:09:45.922 "flush": true, 00:09:45.922 "reset": true, 00:09:45.922 "nvme_admin": false, 00:09:45.922 "nvme_io": false, 00:09:45.922 "nvme_io_md": false, 00:09:45.922 "write_zeroes": true, 00:09:45.922 "zcopy": true, 00:09:45.922 "get_zone_info": false, 00:09:45.922 "zone_management": false, 00:09:45.922 "zone_append": false, 00:09:45.922 "compare": false, 00:09:45.922 "compare_and_write": false, 00:09:45.922 "abort": true, 00:09:45.922 "seek_hole": false, 00:09:45.922 "seek_data": false, 00:09:45.922 "copy": true, 00:09:45.922 "nvme_iov_md": false 00:09:45.922 }, 00:09:45.922 "memory_domains": [ 00:09:45.922 { 00:09:45.922 "dma_device_id": "system", 00:09:45.922 "dma_device_type": 1 00:09:45.922 }, 00:09:45.922 { 00:09:45.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.922 "dma_device_type": 2 00:09:45.922 } 00:09:45.922 ], 00:09:45.922 "driver_specific": {} 
00:09:45.922 } 00:09:45.922 ] 00:09:45.922 20:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.922 20:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:45.922 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:45.922 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:45.922 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:45.922 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.922 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:45.922 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:45.922 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.922 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:45.922 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.922 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.923 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.923 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.923 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.923 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.923 20:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:45.923 20:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.923 20:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.923 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.923 "name": "Existed_Raid", 00:09:45.923 "uuid": "9c5e6ebf-ea1f-42f7-87f1-3f869d4ad79b", 00:09:45.923 "strip_size_kb": 64, 00:09:45.923 "state": "online", 00:09:45.923 "raid_level": "concat", 00:09:45.923 "superblock": true, 00:09:45.923 "num_base_bdevs": 3, 00:09:45.923 "num_base_bdevs_discovered": 3, 00:09:45.923 "num_base_bdevs_operational": 3, 00:09:45.923 "base_bdevs_list": [ 00:09:45.923 { 00:09:45.923 "name": "BaseBdev1", 00:09:45.923 "uuid": "3871beda-f74c-46a0-98a6-2df80430e61f", 00:09:45.923 "is_configured": true, 00:09:45.923 "data_offset": 2048, 00:09:45.923 "data_size": 63488 00:09:45.923 }, 00:09:45.923 { 00:09:45.923 "name": "BaseBdev2", 00:09:45.923 "uuid": "0c41c186-74bf-45d1-94c6-701d679e795d", 00:09:45.923 "is_configured": true, 00:09:45.923 "data_offset": 2048, 00:09:45.923 "data_size": 63488 00:09:45.923 }, 00:09:45.923 { 00:09:45.923 "name": "BaseBdev3", 00:09:45.923 "uuid": "bfb560de-95dd-4c9a-828b-243bccfcd0a5", 00:09:45.923 "is_configured": true, 00:09:45.923 "data_offset": 2048, 00:09:45.923 "data_size": 63488 00:09:45.923 } 00:09:45.923 ] 00:09:45.923 }' 00:09:45.923 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.923 20:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.182 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:46.182 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:46.182 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:09:46.182 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:46.182 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:46.182 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:46.182 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:46.182 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:46.182 20:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.182 20:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.182 [2024-11-26 20:22:39.685866] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:46.182 20:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.182 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:46.182 "name": "Existed_Raid", 00:09:46.182 "aliases": [ 00:09:46.182 "9c5e6ebf-ea1f-42f7-87f1-3f869d4ad79b" 00:09:46.182 ], 00:09:46.182 "product_name": "Raid Volume", 00:09:46.182 "block_size": 512, 00:09:46.182 "num_blocks": 190464, 00:09:46.182 "uuid": "9c5e6ebf-ea1f-42f7-87f1-3f869d4ad79b", 00:09:46.182 "assigned_rate_limits": { 00:09:46.182 "rw_ios_per_sec": 0, 00:09:46.182 "rw_mbytes_per_sec": 0, 00:09:46.182 "r_mbytes_per_sec": 0, 00:09:46.182 "w_mbytes_per_sec": 0 00:09:46.182 }, 00:09:46.182 "claimed": false, 00:09:46.182 "zoned": false, 00:09:46.182 "supported_io_types": { 00:09:46.182 "read": true, 00:09:46.182 "write": true, 00:09:46.182 "unmap": true, 00:09:46.182 "flush": true, 00:09:46.182 "reset": true, 00:09:46.182 "nvme_admin": false, 00:09:46.182 "nvme_io": false, 00:09:46.182 "nvme_io_md": false, 00:09:46.182 
"write_zeroes": true, 00:09:46.182 "zcopy": false, 00:09:46.182 "get_zone_info": false, 00:09:46.182 "zone_management": false, 00:09:46.182 "zone_append": false, 00:09:46.182 "compare": false, 00:09:46.182 "compare_and_write": false, 00:09:46.182 "abort": false, 00:09:46.182 "seek_hole": false, 00:09:46.182 "seek_data": false, 00:09:46.182 "copy": false, 00:09:46.182 "nvme_iov_md": false 00:09:46.182 }, 00:09:46.182 "memory_domains": [ 00:09:46.182 { 00:09:46.182 "dma_device_id": "system", 00:09:46.182 "dma_device_type": 1 00:09:46.182 }, 00:09:46.182 { 00:09:46.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.182 "dma_device_type": 2 00:09:46.182 }, 00:09:46.182 { 00:09:46.182 "dma_device_id": "system", 00:09:46.182 "dma_device_type": 1 00:09:46.182 }, 00:09:46.182 { 00:09:46.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.182 "dma_device_type": 2 00:09:46.182 }, 00:09:46.182 { 00:09:46.182 "dma_device_id": "system", 00:09:46.182 "dma_device_type": 1 00:09:46.182 }, 00:09:46.182 { 00:09:46.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.182 "dma_device_type": 2 00:09:46.182 } 00:09:46.182 ], 00:09:46.182 "driver_specific": { 00:09:46.182 "raid": { 00:09:46.182 "uuid": "9c5e6ebf-ea1f-42f7-87f1-3f869d4ad79b", 00:09:46.182 "strip_size_kb": 64, 00:09:46.182 "state": "online", 00:09:46.182 "raid_level": "concat", 00:09:46.182 "superblock": true, 00:09:46.182 "num_base_bdevs": 3, 00:09:46.182 "num_base_bdevs_discovered": 3, 00:09:46.182 "num_base_bdevs_operational": 3, 00:09:46.182 "base_bdevs_list": [ 00:09:46.182 { 00:09:46.182 "name": "BaseBdev1", 00:09:46.182 "uuid": "3871beda-f74c-46a0-98a6-2df80430e61f", 00:09:46.182 "is_configured": true, 00:09:46.182 "data_offset": 2048, 00:09:46.182 "data_size": 63488 00:09:46.182 }, 00:09:46.182 { 00:09:46.182 "name": "BaseBdev2", 00:09:46.182 "uuid": "0c41c186-74bf-45d1-94c6-701d679e795d", 00:09:46.182 "is_configured": true, 00:09:46.182 "data_offset": 2048, 00:09:46.182 "data_size": 63488 00:09:46.182 }, 
00:09:46.182 { 00:09:46.182 "name": "BaseBdev3", 00:09:46.182 "uuid": "bfb560de-95dd-4c9a-828b-243bccfcd0a5", 00:09:46.182 "is_configured": true, 00:09:46.182 "data_offset": 2048, 00:09:46.182 "data_size": 63488 00:09:46.182 } 00:09:46.182 ] 00:09:46.182 } 00:09:46.182 } 00:09:46.182 }' 00:09:46.182 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:46.441 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:46.441 BaseBdev2 00:09:46.441 BaseBdev3' 00:09:46.441 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.441 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:46.441 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:46.441 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.441 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:46.441 20:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.441 20:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.441 20:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.441 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:46.441 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:46.441 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:46.441 
20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:46.442 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.442 20:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.442 20:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.442 20:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.442 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:46.442 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:46.442 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:46.442 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.442 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:46.442 20:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.442 20:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.442 20:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.442 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:46.442 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:46.442 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:46.442 20:22:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.442 20:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.442 [2024-11-26 20:22:39.913253] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:46.442 [2024-11-26 20:22:39.913287] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:46.442 [2024-11-26 20:22:39.913346] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:46.442 20:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.442 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:46.442 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:46.442 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:46.442 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:46.442 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:46.442 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:46.442 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.442 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:46.442 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:46.442 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:46.442 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:46.442 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:46.442 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.442 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.442 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.442 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.442 20:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.442 20:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.442 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.442 20:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.442 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.442 "name": "Existed_Raid", 00:09:46.442 "uuid": "9c5e6ebf-ea1f-42f7-87f1-3f869d4ad79b", 00:09:46.442 "strip_size_kb": 64, 00:09:46.442 "state": "offline", 00:09:46.442 "raid_level": "concat", 00:09:46.442 "superblock": true, 00:09:46.442 "num_base_bdevs": 3, 00:09:46.442 "num_base_bdevs_discovered": 2, 00:09:46.442 "num_base_bdevs_operational": 2, 00:09:46.442 "base_bdevs_list": [ 00:09:46.442 { 00:09:46.442 "name": null, 00:09:46.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.442 "is_configured": false, 00:09:46.442 "data_offset": 0, 00:09:46.442 "data_size": 63488 00:09:46.442 }, 00:09:46.442 { 00:09:46.442 "name": "BaseBdev2", 00:09:46.442 "uuid": "0c41c186-74bf-45d1-94c6-701d679e795d", 00:09:46.442 "is_configured": true, 00:09:46.442 "data_offset": 2048, 00:09:46.442 "data_size": 63488 00:09:46.442 }, 00:09:46.442 { 00:09:46.442 "name": "BaseBdev3", 00:09:46.442 "uuid": "bfb560de-95dd-4c9a-828b-243bccfcd0a5", 
00:09:46.442 "is_configured": true, 00:09:46.442 "data_offset": 2048, 00:09:46.442 "data_size": 63488 00:09:46.442 } 00:09:46.442 ] 00:09:46.442 }' 00:09:46.442 20:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.442 20:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.015 20:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:47.015 20:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:47.015 20:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.015 20:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:47.015 20:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.015 20:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.015 20:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.015 20:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:47.015 20:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:47.015 20:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:47.015 20:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.015 20:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.015 [2024-11-26 20:22:40.423944] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:47.015 20:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.015 20:22:40 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:47.015 20:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:47.015 20:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.015 20:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:47.015 20:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.015 20:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.015 20:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.015 20:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:47.015 20:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:47.015 20:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:47.015 20:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.015 20:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.015 [2024-11-26 20:22:40.504935] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:47.015 [2024-11-26 20:22:40.505051] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:09:47.015 20:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.015 20:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:47.015 20:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:47.015 20:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:47.015 20:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.015 20:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.015 20:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:47.015 20:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.278 20:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:47.278 20:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:47.278 20:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:47.278 20:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:47.278 20:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:47.278 20:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:47.278 20:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.278 20:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.278 BaseBdev2 00:09:47.278 20:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.278 20:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:47.278 20:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:47.278 20:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:47.278 20:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:47.278 20:22:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:47.278 20:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:47.278 20:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:47.278 20:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.278 20:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.278 20:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.278 20:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:47.278 20:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.278 20:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.278 [ 00:09:47.278 { 00:09:47.278 "name": "BaseBdev2", 00:09:47.278 "aliases": [ 00:09:47.278 "50d771e8-d933-4855-b41b-dc1e34928d2f" 00:09:47.278 ], 00:09:47.278 "product_name": "Malloc disk", 00:09:47.278 "block_size": 512, 00:09:47.278 "num_blocks": 65536, 00:09:47.278 "uuid": "50d771e8-d933-4855-b41b-dc1e34928d2f", 00:09:47.278 "assigned_rate_limits": { 00:09:47.278 "rw_ios_per_sec": 0, 00:09:47.278 "rw_mbytes_per_sec": 0, 00:09:47.278 "r_mbytes_per_sec": 0, 00:09:47.278 "w_mbytes_per_sec": 0 00:09:47.278 }, 00:09:47.278 "claimed": false, 00:09:47.278 "zoned": false, 00:09:47.278 "supported_io_types": { 00:09:47.278 "read": true, 00:09:47.278 "write": true, 00:09:47.278 "unmap": true, 00:09:47.278 "flush": true, 00:09:47.278 "reset": true, 00:09:47.278 "nvme_admin": false, 00:09:47.278 "nvme_io": false, 00:09:47.278 "nvme_io_md": false, 00:09:47.278 "write_zeroes": true, 00:09:47.278 "zcopy": true, 00:09:47.278 "get_zone_info": false, 00:09:47.278 
"zone_management": false, 00:09:47.278 "zone_append": false, 00:09:47.278 "compare": false, 00:09:47.278 "compare_and_write": false, 00:09:47.278 "abort": true, 00:09:47.278 "seek_hole": false, 00:09:47.278 "seek_data": false, 00:09:47.278 "copy": true, 00:09:47.278 "nvme_iov_md": false 00:09:47.278 }, 00:09:47.278 "memory_domains": [ 00:09:47.278 { 00:09:47.278 "dma_device_id": "system", 00:09:47.278 "dma_device_type": 1 00:09:47.278 }, 00:09:47.278 { 00:09:47.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.278 "dma_device_type": 2 00:09:47.278 } 00:09:47.278 ], 00:09:47.278 "driver_specific": {} 00:09:47.278 } 00:09:47.278 ] 00:09:47.278 20:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.278 20:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:47.278 20:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:47.278 20:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:47.278 20:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:47.278 20:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.278 20:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.278 BaseBdev3 00:09:47.278 20:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.278 20:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:47.278 20:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:47.278 20:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:47.278 20:22:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local i 00:09:47.278 20:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:47.278 20:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:47.278 20:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:47.278 20:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.278 20:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.278 20:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.278 20:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:47.278 20:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.278 20:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.278 [ 00:09:47.278 { 00:09:47.278 "name": "BaseBdev3", 00:09:47.278 "aliases": [ 00:09:47.278 "5542a78d-c0a2-454e-9a13-eac0e775c5e5" 00:09:47.278 ], 00:09:47.278 "product_name": "Malloc disk", 00:09:47.278 "block_size": 512, 00:09:47.278 "num_blocks": 65536, 00:09:47.278 "uuid": "5542a78d-c0a2-454e-9a13-eac0e775c5e5", 00:09:47.278 "assigned_rate_limits": { 00:09:47.278 "rw_ios_per_sec": 0, 00:09:47.278 "rw_mbytes_per_sec": 0, 00:09:47.278 "r_mbytes_per_sec": 0, 00:09:47.278 "w_mbytes_per_sec": 0 00:09:47.278 }, 00:09:47.278 "claimed": false, 00:09:47.278 "zoned": false, 00:09:47.278 "supported_io_types": { 00:09:47.278 "read": true, 00:09:47.278 "write": true, 00:09:47.278 "unmap": true, 00:09:47.278 "flush": true, 00:09:47.278 "reset": true, 00:09:47.278 "nvme_admin": false, 00:09:47.278 "nvme_io": false, 00:09:47.278 "nvme_io_md": false, 00:09:47.278 "write_zeroes": true, 00:09:47.278 
"zcopy": true, 00:09:47.278 "get_zone_info": false, 00:09:47.278 "zone_management": false, 00:09:47.278 "zone_append": false, 00:09:47.278 "compare": false, 00:09:47.278 "compare_and_write": false, 00:09:47.278 "abort": true, 00:09:47.278 "seek_hole": false, 00:09:47.278 "seek_data": false, 00:09:47.278 "copy": true, 00:09:47.278 "nvme_iov_md": false 00:09:47.278 }, 00:09:47.278 "memory_domains": [ 00:09:47.278 { 00:09:47.278 "dma_device_id": "system", 00:09:47.278 "dma_device_type": 1 00:09:47.278 }, 00:09:47.278 { 00:09:47.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.278 "dma_device_type": 2 00:09:47.278 } 00:09:47.278 ], 00:09:47.278 "driver_specific": {} 00:09:47.278 } 00:09:47.278 ] 00:09:47.278 20:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.278 20:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:47.278 20:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:47.278 20:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:47.278 20:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:47.278 20:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.278 20:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.278 [2024-11-26 20:22:40.661020] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:47.278 [2024-11-26 20:22:40.661113] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:47.278 [2024-11-26 20:22:40.661158] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:47.278 [2024-11-26 20:22:40.663055] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:47.278 20:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.278 20:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:47.278 20:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.278 20:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:47.278 20:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:47.279 20:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:47.279 20:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:47.279 20:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.279 20:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.279 20:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.279 20:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.279 20:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.279 20:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.279 20:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.279 20:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.279 20:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.279 20:22:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.279 "name": "Existed_Raid", 00:09:47.279 "uuid": "e94028cc-9c37-4c51-821e-8d9465118242", 00:09:47.279 "strip_size_kb": 64, 00:09:47.279 "state": "configuring", 00:09:47.279 "raid_level": "concat", 00:09:47.279 "superblock": true, 00:09:47.279 "num_base_bdevs": 3, 00:09:47.279 "num_base_bdevs_discovered": 2, 00:09:47.279 "num_base_bdevs_operational": 3, 00:09:47.279 "base_bdevs_list": [ 00:09:47.279 { 00:09:47.279 "name": "BaseBdev1", 00:09:47.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.279 "is_configured": false, 00:09:47.279 "data_offset": 0, 00:09:47.279 "data_size": 0 00:09:47.279 }, 00:09:47.279 { 00:09:47.279 "name": "BaseBdev2", 00:09:47.279 "uuid": "50d771e8-d933-4855-b41b-dc1e34928d2f", 00:09:47.279 "is_configured": true, 00:09:47.279 "data_offset": 2048, 00:09:47.279 "data_size": 63488 00:09:47.279 }, 00:09:47.279 { 00:09:47.279 "name": "BaseBdev3", 00:09:47.279 "uuid": "5542a78d-c0a2-454e-9a13-eac0e775c5e5", 00:09:47.279 "is_configured": true, 00:09:47.279 "data_offset": 2048, 00:09:47.279 "data_size": 63488 00:09:47.279 } 00:09:47.279 ] 00:09:47.279 }' 00:09:47.279 20:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.279 20:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.848 20:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:47.848 20:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.848 20:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.848 [2024-11-26 20:22:41.124265] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:47.848 20:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.848 20:22:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:47.848 20:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.848 20:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:47.848 20:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:47.848 20:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:47.848 20:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:47.848 20:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.848 20:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.848 20:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.848 20:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.848 20:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.848 20:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.848 20:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.848 20:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.848 20:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.848 20:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.848 "name": "Existed_Raid", 00:09:47.848 "uuid": "e94028cc-9c37-4c51-821e-8d9465118242", 00:09:47.848 "strip_size_kb": 64, 
00:09:47.848 "state": "configuring", 00:09:47.848 "raid_level": "concat", 00:09:47.848 "superblock": true, 00:09:47.848 "num_base_bdevs": 3, 00:09:47.848 "num_base_bdevs_discovered": 1, 00:09:47.848 "num_base_bdevs_operational": 3, 00:09:47.848 "base_bdevs_list": [ 00:09:47.848 { 00:09:47.848 "name": "BaseBdev1", 00:09:47.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.848 "is_configured": false, 00:09:47.848 "data_offset": 0, 00:09:47.848 "data_size": 0 00:09:47.848 }, 00:09:47.848 { 00:09:47.849 "name": null, 00:09:47.849 "uuid": "50d771e8-d933-4855-b41b-dc1e34928d2f", 00:09:47.849 "is_configured": false, 00:09:47.849 "data_offset": 0, 00:09:47.849 "data_size": 63488 00:09:47.849 }, 00:09:47.849 { 00:09:47.849 "name": "BaseBdev3", 00:09:47.849 "uuid": "5542a78d-c0a2-454e-9a13-eac0e775c5e5", 00:09:47.849 "is_configured": true, 00:09:47.849 "data_offset": 2048, 00:09:47.849 "data_size": 63488 00:09:47.849 } 00:09:47.849 ] 00:09:47.849 }' 00:09:47.849 20:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.849 20:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.108 20:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:48.108 20:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.108 20:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.108 20:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.108 20:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.108 20:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:48.108 20:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:09:48.108 20:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.108 20:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.108 [2024-11-26 20:22:41.632821] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:48.108 BaseBdev1 00:09:48.109 20:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.109 20:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:48.109 20:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:48.109 20:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:48.109 20:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:48.109 20:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:48.109 20:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:48.109 20:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:48.109 20:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.109 20:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.109 20:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.109 20:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:48.109 20:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.109 20:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.368 
[ 00:09:48.368 { 00:09:48.368 "name": "BaseBdev1", 00:09:48.368 "aliases": [ 00:09:48.368 "d5848065-960e-456c-ae80-ccce9b906975" 00:09:48.368 ], 00:09:48.368 "product_name": "Malloc disk", 00:09:48.368 "block_size": 512, 00:09:48.368 "num_blocks": 65536, 00:09:48.368 "uuid": "d5848065-960e-456c-ae80-ccce9b906975", 00:09:48.368 "assigned_rate_limits": { 00:09:48.368 "rw_ios_per_sec": 0, 00:09:48.368 "rw_mbytes_per_sec": 0, 00:09:48.368 "r_mbytes_per_sec": 0, 00:09:48.368 "w_mbytes_per_sec": 0 00:09:48.368 }, 00:09:48.368 "claimed": true, 00:09:48.368 "claim_type": "exclusive_write", 00:09:48.368 "zoned": false, 00:09:48.368 "supported_io_types": { 00:09:48.368 "read": true, 00:09:48.368 "write": true, 00:09:48.368 "unmap": true, 00:09:48.368 "flush": true, 00:09:48.368 "reset": true, 00:09:48.368 "nvme_admin": false, 00:09:48.368 "nvme_io": false, 00:09:48.368 "nvme_io_md": false, 00:09:48.368 "write_zeroes": true, 00:09:48.368 "zcopy": true, 00:09:48.368 "get_zone_info": false, 00:09:48.368 "zone_management": false, 00:09:48.368 "zone_append": false, 00:09:48.368 "compare": false, 00:09:48.368 "compare_and_write": false, 00:09:48.368 "abort": true, 00:09:48.368 "seek_hole": false, 00:09:48.368 "seek_data": false, 00:09:48.368 "copy": true, 00:09:48.368 "nvme_iov_md": false 00:09:48.368 }, 00:09:48.368 "memory_domains": [ 00:09:48.368 { 00:09:48.368 "dma_device_id": "system", 00:09:48.368 "dma_device_type": 1 00:09:48.368 }, 00:09:48.368 { 00:09:48.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.368 "dma_device_type": 2 00:09:48.368 } 00:09:48.368 ], 00:09:48.368 "driver_specific": {} 00:09:48.368 } 00:09:48.368 ] 00:09:48.368 20:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.368 20:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:48.368 20:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring concat 64 3 00:09:48.368 20:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.368 20:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.368 20:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:48.368 20:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:48.368 20:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:48.368 20:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.368 20:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.368 20:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.368 20:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.368 20:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.368 20:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.368 20:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.368 20:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.368 20:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.368 20:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.368 "name": "Existed_Raid", 00:09:48.368 "uuid": "e94028cc-9c37-4c51-821e-8d9465118242", 00:09:48.368 "strip_size_kb": 64, 00:09:48.368 "state": "configuring", 00:09:48.368 "raid_level": "concat", 00:09:48.368 "superblock": true, 
00:09:48.368 "num_base_bdevs": 3, 00:09:48.368 "num_base_bdevs_discovered": 2, 00:09:48.368 "num_base_bdevs_operational": 3, 00:09:48.368 "base_bdevs_list": [ 00:09:48.368 { 00:09:48.368 "name": "BaseBdev1", 00:09:48.368 "uuid": "d5848065-960e-456c-ae80-ccce9b906975", 00:09:48.368 "is_configured": true, 00:09:48.368 "data_offset": 2048, 00:09:48.368 "data_size": 63488 00:09:48.368 }, 00:09:48.368 { 00:09:48.368 "name": null, 00:09:48.368 "uuid": "50d771e8-d933-4855-b41b-dc1e34928d2f", 00:09:48.368 "is_configured": false, 00:09:48.368 "data_offset": 0, 00:09:48.368 "data_size": 63488 00:09:48.368 }, 00:09:48.368 { 00:09:48.368 "name": "BaseBdev3", 00:09:48.368 "uuid": "5542a78d-c0a2-454e-9a13-eac0e775c5e5", 00:09:48.368 "is_configured": true, 00:09:48.368 "data_offset": 2048, 00:09:48.368 "data_size": 63488 00:09:48.368 } 00:09:48.368 ] 00:09:48.368 }' 00:09:48.368 20:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.368 20:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.627 20:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.627 20:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:48.627 20:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.627 20:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.627 20:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.627 20:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:48.627 20:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:48.627 20:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:09:48.627 20:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.886 [2024-11-26 20:22:42.176254] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:48.886 20:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.886 20:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:48.886 20:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.886 20:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.886 20:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:48.886 20:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:48.886 20:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:48.886 20:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.886 20:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.886 20:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.886 20:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.886 20:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.886 20:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.886 20:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.886 20:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:09:48.886 20:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.886 20:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.886 "name": "Existed_Raid", 00:09:48.886 "uuid": "e94028cc-9c37-4c51-821e-8d9465118242", 00:09:48.886 "strip_size_kb": 64, 00:09:48.886 "state": "configuring", 00:09:48.886 "raid_level": "concat", 00:09:48.886 "superblock": true, 00:09:48.886 "num_base_bdevs": 3, 00:09:48.886 "num_base_bdevs_discovered": 1, 00:09:48.886 "num_base_bdevs_operational": 3, 00:09:48.886 "base_bdevs_list": [ 00:09:48.886 { 00:09:48.886 "name": "BaseBdev1", 00:09:48.886 "uuid": "d5848065-960e-456c-ae80-ccce9b906975", 00:09:48.886 "is_configured": true, 00:09:48.886 "data_offset": 2048, 00:09:48.886 "data_size": 63488 00:09:48.886 }, 00:09:48.886 { 00:09:48.886 "name": null, 00:09:48.886 "uuid": "50d771e8-d933-4855-b41b-dc1e34928d2f", 00:09:48.886 "is_configured": false, 00:09:48.886 "data_offset": 0, 00:09:48.886 "data_size": 63488 00:09:48.886 }, 00:09:48.886 { 00:09:48.886 "name": null, 00:09:48.886 "uuid": "5542a78d-c0a2-454e-9a13-eac0e775c5e5", 00:09:48.886 "is_configured": false, 00:09:48.886 "data_offset": 0, 00:09:48.886 "data_size": 63488 00:09:48.886 } 00:09:48.886 ] 00:09:48.886 }' 00:09:48.886 20:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.886 20:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.145 20:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.145 20:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:49.145 20:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.145 20:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:49.145 20:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.145 20:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:49.145 20:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:49.145 20:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.145 20:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.404 [2024-11-26 20:22:42.699411] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:49.404 20:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.404 20:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:49.404 20:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.404 20:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:49.404 20:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:49.404 20:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:49.404 20:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:49.404 20:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.404 20:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.404 20:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.404 20:22:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.404 20:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.404 20:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.404 20:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.404 20:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.404 20:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.404 20:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.404 "name": "Existed_Raid", 00:09:49.404 "uuid": "e94028cc-9c37-4c51-821e-8d9465118242", 00:09:49.404 "strip_size_kb": 64, 00:09:49.404 "state": "configuring", 00:09:49.404 "raid_level": "concat", 00:09:49.404 "superblock": true, 00:09:49.404 "num_base_bdevs": 3, 00:09:49.404 "num_base_bdevs_discovered": 2, 00:09:49.404 "num_base_bdevs_operational": 3, 00:09:49.404 "base_bdevs_list": [ 00:09:49.404 { 00:09:49.404 "name": "BaseBdev1", 00:09:49.404 "uuid": "d5848065-960e-456c-ae80-ccce9b906975", 00:09:49.404 "is_configured": true, 00:09:49.404 "data_offset": 2048, 00:09:49.404 "data_size": 63488 00:09:49.404 }, 00:09:49.404 { 00:09:49.404 "name": null, 00:09:49.404 "uuid": "50d771e8-d933-4855-b41b-dc1e34928d2f", 00:09:49.404 "is_configured": false, 00:09:49.404 "data_offset": 0, 00:09:49.404 "data_size": 63488 00:09:49.404 }, 00:09:49.404 { 00:09:49.404 "name": "BaseBdev3", 00:09:49.404 "uuid": "5542a78d-c0a2-454e-9a13-eac0e775c5e5", 00:09:49.404 "is_configured": true, 00:09:49.404 "data_offset": 2048, 00:09:49.404 "data_size": 63488 00:09:49.404 } 00:09:49.404 ] 00:09:49.404 }' 00:09:49.404 20:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.404 20:22:42 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:09:49.663 20:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.663 20:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.663 20:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.663 20:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:49.663 20:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.663 20:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:49.663 20:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:49.663 20:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.663 20:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.663 [2024-11-26 20:22:43.198539] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:49.921 20:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.921 20:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:49.921 20:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.921 20:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:49.921 20:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:49.921 20:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:49.921 20:22:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:49.921 20:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.921 20:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.921 20:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.921 20:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.921 20:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.921 20:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.921 20:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.921 20:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.921 20:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.921 20:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.921 "name": "Existed_Raid", 00:09:49.921 "uuid": "e94028cc-9c37-4c51-821e-8d9465118242", 00:09:49.921 "strip_size_kb": 64, 00:09:49.921 "state": "configuring", 00:09:49.921 "raid_level": "concat", 00:09:49.921 "superblock": true, 00:09:49.921 "num_base_bdevs": 3, 00:09:49.921 "num_base_bdevs_discovered": 1, 00:09:49.921 "num_base_bdevs_operational": 3, 00:09:49.921 "base_bdevs_list": [ 00:09:49.921 { 00:09:49.921 "name": null, 00:09:49.921 "uuid": "d5848065-960e-456c-ae80-ccce9b906975", 00:09:49.921 "is_configured": false, 00:09:49.921 "data_offset": 0, 00:09:49.921 "data_size": 63488 00:09:49.921 }, 00:09:49.921 { 00:09:49.921 "name": null, 00:09:49.921 "uuid": "50d771e8-d933-4855-b41b-dc1e34928d2f", 00:09:49.921 "is_configured": false, 00:09:49.921 "data_offset": 0, 
00:09:49.921 "data_size": 63488 00:09:49.921 }, 00:09:49.921 { 00:09:49.921 "name": "BaseBdev3", 00:09:49.921 "uuid": "5542a78d-c0a2-454e-9a13-eac0e775c5e5", 00:09:49.921 "is_configured": true, 00:09:49.921 "data_offset": 2048, 00:09:49.921 "data_size": 63488 00:09:49.921 } 00:09:49.921 ] 00:09:49.921 }' 00:09:49.921 20:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.921 20:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.181 20:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.181 20:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.181 20:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.181 20:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:50.181 20:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.181 20:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:50.181 20:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:50.181 20:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.181 20:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.181 [2024-11-26 20:22:43.648184] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:50.181 20:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.181 20:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:50.181 20:22:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.181 20:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:50.181 20:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:50.181 20:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.181 20:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.181 20:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.181 20:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.181 20:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.181 20:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.181 20:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.181 20:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.181 20:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.181 20:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.181 20:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.181 20:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.181 "name": "Existed_Raid", 00:09:50.181 "uuid": "e94028cc-9c37-4c51-821e-8d9465118242", 00:09:50.181 "strip_size_kb": 64, 00:09:50.181 "state": "configuring", 00:09:50.181 "raid_level": "concat", 00:09:50.181 "superblock": true, 00:09:50.181 "num_base_bdevs": 3, 00:09:50.181 
"num_base_bdevs_discovered": 2, 00:09:50.181 "num_base_bdevs_operational": 3, 00:09:50.181 "base_bdevs_list": [ 00:09:50.181 { 00:09:50.181 "name": null, 00:09:50.181 "uuid": "d5848065-960e-456c-ae80-ccce9b906975", 00:09:50.181 "is_configured": false, 00:09:50.181 "data_offset": 0, 00:09:50.181 "data_size": 63488 00:09:50.181 }, 00:09:50.181 { 00:09:50.181 "name": "BaseBdev2", 00:09:50.181 "uuid": "50d771e8-d933-4855-b41b-dc1e34928d2f", 00:09:50.181 "is_configured": true, 00:09:50.181 "data_offset": 2048, 00:09:50.181 "data_size": 63488 00:09:50.181 }, 00:09:50.181 { 00:09:50.181 "name": "BaseBdev3", 00:09:50.181 "uuid": "5542a78d-c0a2-454e-9a13-eac0e775c5e5", 00:09:50.181 "is_configured": true, 00:09:50.181 "data_offset": 2048, 00:09:50.181 "data_size": 63488 00:09:50.181 } 00:09:50.181 ] 00:09:50.181 }' 00:09:50.181 20:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.181 20:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.778 20:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.778 20:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:50.778 20:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.778 20:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.778 20:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.778 20:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:50.778 20:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:50.778 20:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.778 20:22:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.778 20:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.778 20:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.778 20:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d5848065-960e-456c-ae80-ccce9b906975 00:09:50.778 20:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.778 20:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.778 [2024-11-26 20:22:44.216609] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:50.778 [2024-11-26 20:22:44.216909] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:50.778 [2024-11-26 20:22:44.216966] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:50.778 [2024-11-26 20:22:44.217269] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:50.778 NewBaseBdev 00:09:50.778 [2024-11-26 20:22:44.217442] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:50.779 [2024-11-26 20:22:44.217455] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:09:50.779 [2024-11-26 20:22:44.217576] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:50.779 20:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.779 20:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:50.779 20:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:50.779 
20:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:50.779 20:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:50.779 20:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:50.779 20:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:50.779 20:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:50.779 20:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.779 20:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.779 20:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.779 20:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:50.779 20:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.779 20:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.779 [ 00:09:50.779 { 00:09:50.779 "name": "NewBaseBdev", 00:09:50.779 "aliases": [ 00:09:50.779 "d5848065-960e-456c-ae80-ccce9b906975" 00:09:50.779 ], 00:09:50.779 "product_name": "Malloc disk", 00:09:50.779 "block_size": 512, 00:09:50.779 "num_blocks": 65536, 00:09:50.779 "uuid": "d5848065-960e-456c-ae80-ccce9b906975", 00:09:50.779 "assigned_rate_limits": { 00:09:50.779 "rw_ios_per_sec": 0, 00:09:50.779 "rw_mbytes_per_sec": 0, 00:09:50.779 "r_mbytes_per_sec": 0, 00:09:50.779 "w_mbytes_per_sec": 0 00:09:50.779 }, 00:09:50.779 "claimed": true, 00:09:50.779 "claim_type": "exclusive_write", 00:09:50.779 "zoned": false, 00:09:50.779 "supported_io_types": { 00:09:50.779 "read": true, 00:09:50.779 "write": true, 00:09:50.779 
"unmap": true, 00:09:50.779 "flush": true, 00:09:50.779 "reset": true, 00:09:50.779 "nvme_admin": false, 00:09:50.779 "nvme_io": false, 00:09:50.779 "nvme_io_md": false, 00:09:50.779 "write_zeroes": true, 00:09:50.779 "zcopy": true, 00:09:50.779 "get_zone_info": false, 00:09:50.779 "zone_management": false, 00:09:50.779 "zone_append": false, 00:09:50.779 "compare": false, 00:09:50.779 "compare_and_write": false, 00:09:50.779 "abort": true, 00:09:50.779 "seek_hole": false, 00:09:50.779 "seek_data": false, 00:09:50.779 "copy": true, 00:09:50.779 "nvme_iov_md": false 00:09:50.779 }, 00:09:50.779 "memory_domains": [ 00:09:50.779 { 00:09:50.779 "dma_device_id": "system", 00:09:50.779 "dma_device_type": 1 00:09:50.779 }, 00:09:50.779 { 00:09:50.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.779 "dma_device_type": 2 00:09:50.779 } 00:09:50.779 ], 00:09:50.779 "driver_specific": {} 00:09:50.779 } 00:09:50.779 ] 00:09:50.779 20:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.779 20:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:50.779 20:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:50.779 20:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.779 20:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:50.779 20:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:50.779 20:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.779 20:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.779 20:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:09:50.779 20:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.779 20:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.779 20:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.779 20:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.779 20:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.779 20:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.779 20:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.779 20:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.779 20:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.779 "name": "Existed_Raid", 00:09:50.779 "uuid": "e94028cc-9c37-4c51-821e-8d9465118242", 00:09:50.779 "strip_size_kb": 64, 00:09:50.779 "state": "online", 00:09:50.779 "raid_level": "concat", 00:09:50.779 "superblock": true, 00:09:50.779 "num_base_bdevs": 3, 00:09:50.779 "num_base_bdevs_discovered": 3, 00:09:50.779 "num_base_bdevs_operational": 3, 00:09:50.779 "base_bdevs_list": [ 00:09:50.779 { 00:09:50.779 "name": "NewBaseBdev", 00:09:50.779 "uuid": "d5848065-960e-456c-ae80-ccce9b906975", 00:09:50.779 "is_configured": true, 00:09:50.779 "data_offset": 2048, 00:09:50.779 "data_size": 63488 00:09:50.779 }, 00:09:50.779 { 00:09:50.779 "name": "BaseBdev2", 00:09:50.779 "uuid": "50d771e8-d933-4855-b41b-dc1e34928d2f", 00:09:50.779 "is_configured": true, 00:09:50.779 "data_offset": 2048, 00:09:50.779 "data_size": 63488 00:09:50.779 }, 00:09:50.779 { 00:09:50.779 "name": "BaseBdev3", 00:09:50.779 "uuid": "5542a78d-c0a2-454e-9a13-eac0e775c5e5", 
00:09:50.779 "is_configured": true, 00:09:50.779 "data_offset": 2048, 00:09:50.779 "data_size": 63488 00:09:50.779 } 00:09:50.779 ] 00:09:50.779 }' 00:09:50.779 20:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.779 20:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.347 20:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:51.347 20:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:51.347 20:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:51.347 20:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:51.347 20:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:51.347 20:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:51.347 20:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:51.347 20:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:51.347 20:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.347 20:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.347 [2024-11-26 20:22:44.744160] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:51.347 20:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.347 20:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:51.347 "name": "Existed_Raid", 00:09:51.347 "aliases": [ 00:09:51.347 "e94028cc-9c37-4c51-821e-8d9465118242" 00:09:51.347 ], 00:09:51.347 
"product_name": "Raid Volume", 00:09:51.347 "block_size": 512, 00:09:51.347 "num_blocks": 190464, 00:09:51.347 "uuid": "e94028cc-9c37-4c51-821e-8d9465118242", 00:09:51.347 "assigned_rate_limits": { 00:09:51.347 "rw_ios_per_sec": 0, 00:09:51.347 "rw_mbytes_per_sec": 0, 00:09:51.347 "r_mbytes_per_sec": 0, 00:09:51.347 "w_mbytes_per_sec": 0 00:09:51.347 }, 00:09:51.347 "claimed": false, 00:09:51.347 "zoned": false, 00:09:51.347 "supported_io_types": { 00:09:51.347 "read": true, 00:09:51.347 "write": true, 00:09:51.347 "unmap": true, 00:09:51.347 "flush": true, 00:09:51.347 "reset": true, 00:09:51.347 "nvme_admin": false, 00:09:51.347 "nvme_io": false, 00:09:51.347 "nvme_io_md": false, 00:09:51.347 "write_zeroes": true, 00:09:51.347 "zcopy": false, 00:09:51.347 "get_zone_info": false, 00:09:51.347 "zone_management": false, 00:09:51.347 "zone_append": false, 00:09:51.347 "compare": false, 00:09:51.347 "compare_and_write": false, 00:09:51.347 "abort": false, 00:09:51.347 "seek_hole": false, 00:09:51.347 "seek_data": false, 00:09:51.347 "copy": false, 00:09:51.347 "nvme_iov_md": false 00:09:51.347 }, 00:09:51.347 "memory_domains": [ 00:09:51.347 { 00:09:51.347 "dma_device_id": "system", 00:09:51.347 "dma_device_type": 1 00:09:51.347 }, 00:09:51.347 { 00:09:51.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.347 "dma_device_type": 2 00:09:51.347 }, 00:09:51.347 { 00:09:51.347 "dma_device_id": "system", 00:09:51.347 "dma_device_type": 1 00:09:51.347 }, 00:09:51.347 { 00:09:51.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.347 "dma_device_type": 2 00:09:51.347 }, 00:09:51.347 { 00:09:51.347 "dma_device_id": "system", 00:09:51.347 "dma_device_type": 1 00:09:51.347 }, 00:09:51.347 { 00:09:51.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.347 "dma_device_type": 2 00:09:51.347 } 00:09:51.347 ], 00:09:51.347 "driver_specific": { 00:09:51.347 "raid": { 00:09:51.347 "uuid": "e94028cc-9c37-4c51-821e-8d9465118242", 00:09:51.347 "strip_size_kb": 64, 00:09:51.347 
"state": "online", 00:09:51.347 "raid_level": "concat", 00:09:51.347 "superblock": true, 00:09:51.348 "num_base_bdevs": 3, 00:09:51.348 "num_base_bdevs_discovered": 3, 00:09:51.348 "num_base_bdevs_operational": 3, 00:09:51.348 "base_bdevs_list": [ 00:09:51.348 { 00:09:51.348 "name": "NewBaseBdev", 00:09:51.348 "uuid": "d5848065-960e-456c-ae80-ccce9b906975", 00:09:51.348 "is_configured": true, 00:09:51.348 "data_offset": 2048, 00:09:51.348 "data_size": 63488 00:09:51.348 }, 00:09:51.348 { 00:09:51.348 "name": "BaseBdev2", 00:09:51.348 "uuid": "50d771e8-d933-4855-b41b-dc1e34928d2f", 00:09:51.348 "is_configured": true, 00:09:51.348 "data_offset": 2048, 00:09:51.348 "data_size": 63488 00:09:51.348 }, 00:09:51.348 { 00:09:51.348 "name": "BaseBdev3", 00:09:51.348 "uuid": "5542a78d-c0a2-454e-9a13-eac0e775c5e5", 00:09:51.348 "is_configured": true, 00:09:51.348 "data_offset": 2048, 00:09:51.348 "data_size": 63488 00:09:51.348 } 00:09:51.348 ] 00:09:51.348 } 00:09:51.348 } 00:09:51.348 }' 00:09:51.348 20:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:51.348 20:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:51.348 BaseBdev2 00:09:51.348 BaseBdev3' 00:09:51.348 20:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:51.348 20:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:51.348 20:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:51.348 20:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:51.348 20:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.348 20:22:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.348 20:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:51.348 20:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.606 20:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:51.606 20:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:51.606 20:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:51.606 20:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:51.606 20:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:51.606 20:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.606 20:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.606 20:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.606 20:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:51.606 20:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:51.606 20:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:51.606 20:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:51.606 20:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.606 20:22:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:51.606 20:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:51.606 20:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.606 20:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:51.606 20:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:51.606 20:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:51.606 20:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.606 20:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.606 [2024-11-26 20:22:45.031350] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:51.606 [2024-11-26 20:22:45.031386] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:51.606 [2024-11-26 20:22:45.031490] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:51.606 [2024-11-26 20:22:45.031549] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:51.606 [2024-11-26 20:22:45.031571] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:09:51.606 20:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.606 20:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 77772 00:09:51.606 20:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 77772 ']' 00:09:51.606 20:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # 
kill -0 77772 00:09:51.606 20:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:51.606 20:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:51.606 20:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77772 00:09:51.606 20:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:51.606 20:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:51.606 20:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77772' 00:09:51.606 killing process with pid 77772 00:09:51.606 20:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 77772 00:09:51.606 [2024-11-26 20:22:45.082669] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:51.606 20:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 77772 00:09:51.606 [2024-11-26 20:22:45.132353] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:52.175 20:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:52.175 00:09:52.175 real 0m9.255s 00:09:52.175 user 0m15.549s 00:09:52.175 sys 0m1.976s 00:09:52.175 20:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:52.175 20:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.175 ************************************ 00:09:52.175 END TEST raid_state_function_test_sb 00:09:52.175 ************************************ 00:09:52.175 20:22:45 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:09:52.175 20:22:45 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 
00:09:52.175 20:22:45 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:52.175 20:22:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:52.175 ************************************ 00:09:52.175 START TEST raid_superblock_test 00:09:52.175 ************************************ 00:09:52.175 20:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 3 00:09:52.175 20:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:52.175 20:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:52.175 20:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:52.175 20:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:52.175 20:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:52.175 20:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:52.175 20:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:52.175 20:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:52.175 20:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:52.175 20:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:52.175 20:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:52.175 20:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:52.175 20:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:52.175 20:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:52.175 20:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # 
strip_size=64 00:09:52.176 20:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:52.176 20:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=78376 00:09:52.176 20:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:52.176 20:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 78376 00:09:52.176 20:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 78376 ']' 00:09:52.176 20:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.176 20:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:52.176 20:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.176 20:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:52.176 20:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.176 [2024-11-26 20:22:45.691598] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:52.176 [2024-11-26 20:22:45.691807] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78376 ] 00:09:52.435 [2024-11-26 20:22:45.863112] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.435 [2024-11-26 20:22:45.944357] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.695 [2024-11-26 20:22:46.021065] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:52.695 [2024-11-26 20:22:46.021197] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:53.264 20:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:53.264 20:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:53.264 20:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:53.264 20:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:53.264 20:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:53.264 20:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:53.264 20:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:53.264 20:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:53.264 20:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:53.264 20:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:53.264 20:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:53.264 
20:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.264 20:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.264 malloc1 00:09:53.264 20:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.264 20:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:53.264 20:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.264 20:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.264 [2024-11-26 20:22:46.606274] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:53.264 [2024-11-26 20:22:46.606408] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:53.264 [2024-11-26 20:22:46.606451] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:53.264 [2024-11-26 20:22:46.606489] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:53.264 [2024-11-26 20:22:46.608957] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:53.264 [2024-11-26 20:22:46.609071] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:53.264 pt1 00:09:53.264 20:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.264 20:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:53.264 20:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:53.264 20:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:53.264 20:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:53.264 20:22:46 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:53.264 20:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:53.264 20:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:53.264 20:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:53.264 20:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:53.264 20:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.264 20:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.264 malloc2 00:09:53.264 20:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.264 20:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:53.264 20:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.264 20:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.264 [2024-11-26 20:22:46.653338] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:53.264 [2024-11-26 20:22:46.653420] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:53.265 [2024-11-26 20:22:46.653459] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:53.265 [2024-11-26 20:22:46.653471] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:53.265 [2024-11-26 20:22:46.655780] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:53.265 [2024-11-26 20:22:46.655875] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:53.265 
pt2 00:09:53.265 20:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.265 20:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:53.265 20:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:53.265 20:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:53.265 20:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:53.265 20:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:53.265 20:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:53.265 20:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:53.265 20:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:53.265 20:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:53.265 20:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.265 20:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.265 malloc3 00:09:53.265 20:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.265 20:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:53.265 20:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.265 20:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.265 [2024-11-26 20:22:46.684856] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:53.265 [2024-11-26 20:22:46.684966] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:53.265 [2024-11-26 20:22:46.685026] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:53.265 [2024-11-26 20:22:46.685063] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:53.265 [2024-11-26 20:22:46.687367] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:53.265 [2024-11-26 20:22:46.687444] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:53.265 pt3 00:09:53.265 20:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.265 20:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:53.265 20:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:53.265 20:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:53.265 20:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.265 20:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.265 [2024-11-26 20:22:46.696899] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:53.265 [2024-11-26 20:22:46.699130] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:53.265 [2024-11-26 20:22:46.699253] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:53.265 [2024-11-26 20:22:46.699458] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:09:53.265 [2024-11-26 20:22:46.699521] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:53.265 [2024-11-26 20:22:46.699900] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 
00:09:53.265 [2024-11-26 20:22:46.700117] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:09:53.265 [2024-11-26 20:22:46.700175] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:09:53.265 [2024-11-26 20:22:46.700435] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:53.265 20:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.265 20:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:53.265 20:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:53.265 20:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:53.265 20:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:53.265 20:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:53.265 20:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:53.265 20:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.265 20:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.265 20:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.265 20:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.265 20:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.265 20:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:53.265 20:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.265 20:22:46 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.265 20:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.265 20:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.265 "name": "raid_bdev1", 00:09:53.265 "uuid": "ae2f821c-7d65-4ada-ab1a-60ed179e2299", 00:09:53.265 "strip_size_kb": 64, 00:09:53.265 "state": "online", 00:09:53.265 "raid_level": "concat", 00:09:53.265 "superblock": true, 00:09:53.265 "num_base_bdevs": 3, 00:09:53.265 "num_base_bdevs_discovered": 3, 00:09:53.265 "num_base_bdevs_operational": 3, 00:09:53.265 "base_bdevs_list": [ 00:09:53.265 { 00:09:53.265 "name": "pt1", 00:09:53.265 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:53.265 "is_configured": true, 00:09:53.265 "data_offset": 2048, 00:09:53.265 "data_size": 63488 00:09:53.265 }, 00:09:53.265 { 00:09:53.265 "name": "pt2", 00:09:53.265 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:53.265 "is_configured": true, 00:09:53.265 "data_offset": 2048, 00:09:53.265 "data_size": 63488 00:09:53.265 }, 00:09:53.265 { 00:09:53.265 "name": "pt3", 00:09:53.265 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:53.265 "is_configured": true, 00:09:53.265 "data_offset": 2048, 00:09:53.265 "data_size": 63488 00:09:53.265 } 00:09:53.265 ] 00:09:53.265 }' 00:09:53.265 20:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.265 20:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.835 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:53.835 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:53.835 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:53.835 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:53.835 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:53.835 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:53.835 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:53.835 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:53.835 20:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.835 20:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.835 [2024-11-26 20:22:47.132549] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:53.835 20:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.835 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:53.835 "name": "raid_bdev1", 00:09:53.835 "aliases": [ 00:09:53.835 "ae2f821c-7d65-4ada-ab1a-60ed179e2299" 00:09:53.835 ], 00:09:53.835 "product_name": "Raid Volume", 00:09:53.835 "block_size": 512, 00:09:53.835 "num_blocks": 190464, 00:09:53.835 "uuid": "ae2f821c-7d65-4ada-ab1a-60ed179e2299", 00:09:53.835 "assigned_rate_limits": { 00:09:53.835 "rw_ios_per_sec": 0, 00:09:53.835 "rw_mbytes_per_sec": 0, 00:09:53.835 "r_mbytes_per_sec": 0, 00:09:53.836 "w_mbytes_per_sec": 0 00:09:53.836 }, 00:09:53.836 "claimed": false, 00:09:53.836 "zoned": false, 00:09:53.836 "supported_io_types": { 00:09:53.836 "read": true, 00:09:53.836 "write": true, 00:09:53.836 "unmap": true, 00:09:53.836 "flush": true, 00:09:53.836 "reset": true, 00:09:53.836 "nvme_admin": false, 00:09:53.836 "nvme_io": false, 00:09:53.836 "nvme_io_md": false, 00:09:53.836 "write_zeroes": true, 00:09:53.836 "zcopy": false, 00:09:53.836 "get_zone_info": false, 00:09:53.836 "zone_management": false, 00:09:53.836 "zone_append": false, 00:09:53.836 "compare": 
false, 00:09:53.836 "compare_and_write": false, 00:09:53.836 "abort": false, 00:09:53.836 "seek_hole": false, 00:09:53.836 "seek_data": false, 00:09:53.836 "copy": false, 00:09:53.836 "nvme_iov_md": false 00:09:53.836 }, 00:09:53.836 "memory_domains": [ 00:09:53.836 { 00:09:53.836 "dma_device_id": "system", 00:09:53.836 "dma_device_type": 1 00:09:53.836 }, 00:09:53.836 { 00:09:53.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.836 "dma_device_type": 2 00:09:53.836 }, 00:09:53.836 { 00:09:53.836 "dma_device_id": "system", 00:09:53.836 "dma_device_type": 1 00:09:53.836 }, 00:09:53.836 { 00:09:53.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.836 "dma_device_type": 2 00:09:53.836 }, 00:09:53.836 { 00:09:53.836 "dma_device_id": "system", 00:09:53.836 "dma_device_type": 1 00:09:53.836 }, 00:09:53.836 { 00:09:53.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.836 "dma_device_type": 2 00:09:53.836 } 00:09:53.836 ], 00:09:53.836 "driver_specific": { 00:09:53.836 "raid": { 00:09:53.836 "uuid": "ae2f821c-7d65-4ada-ab1a-60ed179e2299", 00:09:53.836 "strip_size_kb": 64, 00:09:53.836 "state": "online", 00:09:53.836 "raid_level": "concat", 00:09:53.836 "superblock": true, 00:09:53.836 "num_base_bdevs": 3, 00:09:53.836 "num_base_bdevs_discovered": 3, 00:09:53.836 "num_base_bdevs_operational": 3, 00:09:53.836 "base_bdevs_list": [ 00:09:53.836 { 00:09:53.836 "name": "pt1", 00:09:53.836 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:53.836 "is_configured": true, 00:09:53.836 "data_offset": 2048, 00:09:53.836 "data_size": 63488 00:09:53.836 }, 00:09:53.836 { 00:09:53.836 "name": "pt2", 00:09:53.836 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:53.836 "is_configured": true, 00:09:53.836 "data_offset": 2048, 00:09:53.836 "data_size": 63488 00:09:53.836 }, 00:09:53.836 { 00:09:53.836 "name": "pt3", 00:09:53.836 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:53.836 "is_configured": true, 00:09:53.836 "data_offset": 2048, 00:09:53.836 
"data_size": 63488 00:09:53.836 } 00:09:53.836 ] 00:09:53.836 } 00:09:53.836 } 00:09:53.836 }' 00:09:53.836 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:53.836 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:53.836 pt2 00:09:53.836 pt3' 00:09:53.836 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.836 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:53.836 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.836 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:53.836 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.836 20:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.836 20:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.836 20:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.836 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:53.836 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:53.836 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.836 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:53.836 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.836 20:22:47 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.836 20:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.836 20:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.836 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:53.836 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:53.836 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.836 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:53.836 20:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.836 20:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.836 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.836 20:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.836 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:53.836 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:53.836 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:53.836 20:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.836 20:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.836 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:53.836 [2024-11-26 20:22:47.372053] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:53.836 20:22:47 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.097 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ae2f821c-7d65-4ada-ab1a-60ed179e2299 00:09:54.097 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z ae2f821c-7d65-4ada-ab1a-60ed179e2299 ']' 00:09:54.097 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:54.097 20:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.097 20:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.097 [2024-11-26 20:22:47.399734] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:54.097 [2024-11-26 20:22:47.399765] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:54.097 [2024-11-26 20:22:47.399860] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:54.097 [2024-11-26 20:22:47.399932] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:54.097 [2024-11-26 20:22:47.399949] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:09:54.097 20:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.097 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.097 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:54.097 20:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.097 20:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.097 20:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.097 20:22:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:54.097 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:54.097 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:54.097 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:54.097 20:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.097 20:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.097 20:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.097 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:54.097 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:54.097 20:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.097 20:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.097 20:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.097 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:54.097 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:54.097 20:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.097 20:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.097 20:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.097 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:54.097 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 
-- # rpc_cmd bdev_get_bdevs 00:09:54.097 20:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.097 20:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.097 20:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.097 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:54.097 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:54.097 20:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:54.097 20:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:54.097 20:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:54.097 20:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:54.097 20:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:54.097 20:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:54.097 20:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:54.097 20:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.097 20:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.097 [2024-11-26 20:22:47.531518] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:54.097 [2024-11-26 20:22:47.533710] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is 
claimed 00:09:54.097 [2024-11-26 20:22:47.533768] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:54.097 [2024-11-26 20:22:47.533832] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:54.097 [2024-11-26 20:22:47.533887] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:54.097 [2024-11-26 20:22:47.533911] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:54.097 [2024-11-26 20:22:47.533925] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:54.097 [2024-11-26 20:22:47.533935] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:09:54.097 request: 00:09:54.097 { 00:09:54.097 "name": "raid_bdev1", 00:09:54.097 "raid_level": "concat", 00:09:54.097 "base_bdevs": [ 00:09:54.097 "malloc1", 00:09:54.097 "malloc2", 00:09:54.097 "malloc3" 00:09:54.097 ], 00:09:54.097 "strip_size_kb": 64, 00:09:54.097 "superblock": false, 00:09:54.097 "method": "bdev_raid_create", 00:09:54.097 "req_id": 1 00:09:54.097 } 00:09:54.097 Got JSON-RPC error response 00:09:54.097 response: 00:09:54.097 { 00:09:54.097 "code": -17, 00:09:54.098 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:54.098 } 00:09:54.098 20:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:54.098 20:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:54.098 20:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:54.098 20:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:54.098 20:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
00:09:54.098 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.098 20:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.098 20:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.098 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:54.098 20:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.098 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:54.098 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:54.098 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:54.098 20:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.098 20:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.098 [2024-11-26 20:22:47.583433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:54.098 [2024-11-26 20:22:47.583595] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.098 [2024-11-26 20:22:47.583679] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:54.098 [2024-11-26 20:22:47.583730] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.098 [2024-11-26 20:22:47.586221] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.098 [2024-11-26 20:22:47.586313] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:54.098 [2024-11-26 20:22:47.586446] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:54.098 [2024-11-26 20:22:47.586546] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:54.098 pt1 00:09:54.098 20:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.098 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:54.098 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:54.098 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.098 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:54.098 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.098 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:54.098 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.098 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.098 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.098 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.098 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.098 20:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.098 20:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.098 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:54.098 20:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.098 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.098 "name": "raid_bdev1", 
00:09:54.098 "uuid": "ae2f821c-7d65-4ada-ab1a-60ed179e2299", 00:09:54.098 "strip_size_kb": 64, 00:09:54.098 "state": "configuring", 00:09:54.098 "raid_level": "concat", 00:09:54.098 "superblock": true, 00:09:54.098 "num_base_bdevs": 3, 00:09:54.098 "num_base_bdevs_discovered": 1, 00:09:54.098 "num_base_bdevs_operational": 3, 00:09:54.098 "base_bdevs_list": [ 00:09:54.098 { 00:09:54.098 "name": "pt1", 00:09:54.098 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:54.098 "is_configured": true, 00:09:54.098 "data_offset": 2048, 00:09:54.098 "data_size": 63488 00:09:54.098 }, 00:09:54.098 { 00:09:54.098 "name": null, 00:09:54.098 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:54.098 "is_configured": false, 00:09:54.098 "data_offset": 2048, 00:09:54.098 "data_size": 63488 00:09:54.098 }, 00:09:54.098 { 00:09:54.098 "name": null, 00:09:54.098 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:54.098 "is_configured": false, 00:09:54.098 "data_offset": 2048, 00:09:54.098 "data_size": 63488 00:09:54.098 } 00:09:54.098 ] 00:09:54.098 }' 00:09:54.098 20:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.098 20:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.667 20:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:54.667 20:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:54.667 20:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.667 20:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.667 [2024-11-26 20:22:48.014702] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:54.667 [2024-11-26 20:22:48.014847] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.667 [2024-11-26 20:22:48.014889] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:54.667 [2024-11-26 20:22:48.014940] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.667 [2024-11-26 20:22:48.015398] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.667 [2024-11-26 20:22:48.015467] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:54.667 [2024-11-26 20:22:48.015594] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:54.667 [2024-11-26 20:22:48.015689] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:54.667 pt2 00:09:54.667 20:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.667 20:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:54.667 20:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.667 20:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.667 [2024-11-26 20:22:48.022690] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:54.667 20:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.667 20:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:54.667 20:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:54.667 20:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.667 20:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:54.667 20:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.667 20:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:09:54.667 20:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.667 20:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.667 20:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.667 20:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.667 20:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.667 20:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.667 20:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.667 20:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:54.667 20:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.667 20:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.667 "name": "raid_bdev1", 00:09:54.667 "uuid": "ae2f821c-7d65-4ada-ab1a-60ed179e2299", 00:09:54.667 "strip_size_kb": 64, 00:09:54.667 "state": "configuring", 00:09:54.667 "raid_level": "concat", 00:09:54.667 "superblock": true, 00:09:54.667 "num_base_bdevs": 3, 00:09:54.667 "num_base_bdevs_discovered": 1, 00:09:54.667 "num_base_bdevs_operational": 3, 00:09:54.667 "base_bdevs_list": [ 00:09:54.667 { 00:09:54.667 "name": "pt1", 00:09:54.667 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:54.667 "is_configured": true, 00:09:54.667 "data_offset": 2048, 00:09:54.667 "data_size": 63488 00:09:54.667 }, 00:09:54.667 { 00:09:54.667 "name": null, 00:09:54.667 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:54.667 "is_configured": false, 00:09:54.667 "data_offset": 0, 00:09:54.667 "data_size": 63488 00:09:54.667 }, 00:09:54.667 { 00:09:54.667 "name": null, 00:09:54.667 
"uuid": "00000000-0000-0000-0000-000000000003", 00:09:54.667 "is_configured": false, 00:09:54.667 "data_offset": 2048, 00:09:54.667 "data_size": 63488 00:09:54.667 } 00:09:54.667 ] 00:09:54.667 }' 00:09:54.667 20:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.667 20:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.927 20:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:54.927 20:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:54.927 20:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:54.927 20:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.927 20:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.928 [2024-11-26 20:22:48.414022] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:54.928 [2024-11-26 20:22:48.414181] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.928 [2024-11-26 20:22:48.414225] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:54.928 [2024-11-26 20:22:48.414292] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.928 [2024-11-26 20:22:48.414844] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.928 [2024-11-26 20:22:48.414912] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:54.928 [2024-11-26 20:22:48.415047] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:54.928 [2024-11-26 20:22:48.415113] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:54.928 pt2 00:09:54.928 20:22:48 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.928 20:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:54.928 20:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:54.928 20:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:54.928 20:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.928 20:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.928 [2024-11-26 20:22:48.425972] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:54.928 [2024-11-26 20:22:48.426036] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.928 [2024-11-26 20:22:48.426061] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:54.928 [2024-11-26 20:22:48.426071] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.928 [2024-11-26 20:22:48.426470] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.928 [2024-11-26 20:22:48.426487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:54.928 [2024-11-26 20:22:48.426564] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:54.928 [2024-11-26 20:22:48.426585] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:54.928 [2024-11-26 20:22:48.426738] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:54.928 [2024-11-26 20:22:48.426749] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:54.928 [2024-11-26 20:22:48.426996] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005d40 00:09:54.928 [2024-11-26 20:22:48.427109] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:54.928 [2024-11-26 20:22:48.427127] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:09:54.928 [2024-11-26 20:22:48.427231] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:54.928 pt3 00:09:54.928 20:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.928 20:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:54.928 20:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:54.928 20:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:54.928 20:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:54.928 20:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:54.928 20:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:54.928 20:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.928 20:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:54.928 20:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.928 20:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.928 20:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.928 20:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.928 20:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.928 20:22:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:54.928 20:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.928 20:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.928 20:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.188 20:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.188 "name": "raid_bdev1", 00:09:55.188 "uuid": "ae2f821c-7d65-4ada-ab1a-60ed179e2299", 00:09:55.188 "strip_size_kb": 64, 00:09:55.188 "state": "online", 00:09:55.188 "raid_level": "concat", 00:09:55.188 "superblock": true, 00:09:55.188 "num_base_bdevs": 3, 00:09:55.188 "num_base_bdevs_discovered": 3, 00:09:55.188 "num_base_bdevs_operational": 3, 00:09:55.188 "base_bdevs_list": [ 00:09:55.188 { 00:09:55.188 "name": "pt1", 00:09:55.188 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:55.188 "is_configured": true, 00:09:55.188 "data_offset": 2048, 00:09:55.188 "data_size": 63488 00:09:55.188 }, 00:09:55.188 { 00:09:55.188 "name": "pt2", 00:09:55.188 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:55.188 "is_configured": true, 00:09:55.188 "data_offset": 2048, 00:09:55.188 "data_size": 63488 00:09:55.188 }, 00:09:55.188 { 00:09:55.188 "name": "pt3", 00:09:55.188 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:55.188 "is_configured": true, 00:09:55.188 "data_offset": 2048, 00:09:55.188 "data_size": 63488 00:09:55.188 } 00:09:55.188 ] 00:09:55.188 }' 00:09:55.188 20:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.188 20:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.448 20:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:55.448 20:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:09:55.448 20:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:55.448 20:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:55.448 20:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:55.448 20:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:55.448 20:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:55.448 20:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:55.448 20:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.448 20:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.448 [2024-11-26 20:22:48.869558] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:55.448 20:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.448 20:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:55.448 "name": "raid_bdev1", 00:09:55.448 "aliases": [ 00:09:55.448 "ae2f821c-7d65-4ada-ab1a-60ed179e2299" 00:09:55.448 ], 00:09:55.448 "product_name": "Raid Volume", 00:09:55.448 "block_size": 512, 00:09:55.448 "num_blocks": 190464, 00:09:55.448 "uuid": "ae2f821c-7d65-4ada-ab1a-60ed179e2299", 00:09:55.448 "assigned_rate_limits": { 00:09:55.448 "rw_ios_per_sec": 0, 00:09:55.448 "rw_mbytes_per_sec": 0, 00:09:55.448 "r_mbytes_per_sec": 0, 00:09:55.448 "w_mbytes_per_sec": 0 00:09:55.448 }, 00:09:55.448 "claimed": false, 00:09:55.448 "zoned": false, 00:09:55.448 "supported_io_types": { 00:09:55.448 "read": true, 00:09:55.448 "write": true, 00:09:55.448 "unmap": true, 00:09:55.448 "flush": true, 00:09:55.448 "reset": true, 00:09:55.448 "nvme_admin": false, 00:09:55.448 "nvme_io": false, 
00:09:55.448 "nvme_io_md": false, 00:09:55.448 "write_zeroes": true, 00:09:55.448 "zcopy": false, 00:09:55.448 "get_zone_info": false, 00:09:55.448 "zone_management": false, 00:09:55.448 "zone_append": false, 00:09:55.448 "compare": false, 00:09:55.448 "compare_and_write": false, 00:09:55.448 "abort": false, 00:09:55.448 "seek_hole": false, 00:09:55.448 "seek_data": false, 00:09:55.448 "copy": false, 00:09:55.448 "nvme_iov_md": false 00:09:55.448 }, 00:09:55.448 "memory_domains": [ 00:09:55.448 { 00:09:55.448 "dma_device_id": "system", 00:09:55.448 "dma_device_type": 1 00:09:55.448 }, 00:09:55.448 { 00:09:55.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.448 "dma_device_type": 2 00:09:55.448 }, 00:09:55.449 { 00:09:55.449 "dma_device_id": "system", 00:09:55.449 "dma_device_type": 1 00:09:55.449 }, 00:09:55.449 { 00:09:55.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.449 "dma_device_type": 2 00:09:55.449 }, 00:09:55.449 { 00:09:55.449 "dma_device_id": "system", 00:09:55.449 "dma_device_type": 1 00:09:55.449 }, 00:09:55.449 { 00:09:55.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.449 "dma_device_type": 2 00:09:55.449 } 00:09:55.449 ], 00:09:55.449 "driver_specific": { 00:09:55.449 "raid": { 00:09:55.449 "uuid": "ae2f821c-7d65-4ada-ab1a-60ed179e2299", 00:09:55.449 "strip_size_kb": 64, 00:09:55.449 "state": "online", 00:09:55.449 "raid_level": "concat", 00:09:55.449 "superblock": true, 00:09:55.449 "num_base_bdevs": 3, 00:09:55.449 "num_base_bdevs_discovered": 3, 00:09:55.449 "num_base_bdevs_operational": 3, 00:09:55.449 "base_bdevs_list": [ 00:09:55.449 { 00:09:55.449 "name": "pt1", 00:09:55.449 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:55.449 "is_configured": true, 00:09:55.449 "data_offset": 2048, 00:09:55.449 "data_size": 63488 00:09:55.449 }, 00:09:55.449 { 00:09:55.449 "name": "pt2", 00:09:55.449 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:55.449 "is_configured": true, 00:09:55.449 "data_offset": 2048, 00:09:55.449 
"data_size": 63488 00:09:55.449 }, 00:09:55.449 { 00:09:55.449 "name": "pt3", 00:09:55.449 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:55.449 "is_configured": true, 00:09:55.449 "data_offset": 2048, 00:09:55.449 "data_size": 63488 00:09:55.449 } 00:09:55.449 ] 00:09:55.449 } 00:09:55.449 } 00:09:55.449 }' 00:09:55.449 20:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:55.449 20:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:55.449 pt2 00:09:55.449 pt3' 00:09:55.449 20:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.449 20:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:55.449 20:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:55.708 20:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:55.708 20:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.708 20:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.708 20:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.708 20:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.708 20:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:55.708 20:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:55.708 20:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:55.708 20:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.708 20:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:55.708 20:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.708 20:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.708 20:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.708 20:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:55.708 20:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:55.708 20:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:55.708 20:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:55.708 20:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.708 20:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.708 20:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.708 20:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.708 20:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:55.708 20:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:55.708 20:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:55.708 20:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:55.708 20:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.708 20:22:49 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:55.708 [2024-11-26 20:22:49.141115] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:55.708 20:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.708 20:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' ae2f821c-7d65-4ada-ab1a-60ed179e2299 '!=' ae2f821c-7d65-4ada-ab1a-60ed179e2299 ']' 00:09:55.708 20:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:55.708 20:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:55.708 20:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:55.708 20:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 78376 00:09:55.708 20:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 78376 ']' 00:09:55.708 20:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 78376 00:09:55.708 20:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:09:55.708 20:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:55.708 20:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78376 00:09:55.708 20:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:55.708 20:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:55.708 20:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78376' 00:09:55.708 killing process with pid 78376 00:09:55.708 20:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 78376 00:09:55.708 [2024-11-26 20:22:49.230316] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:09:55.708 20:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 78376 00:09:55.708 [2024-11-26 20:22:49.230506] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:55.708 [2024-11-26 20:22:49.230585] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:55.708 [2024-11-26 20:22:49.230601] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:09:55.967 [2024-11-26 20:22:49.281497] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:56.227 20:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:56.227 00:09:56.227 real 0m4.055s 00:09:56.227 user 0m6.188s 00:09:56.227 sys 0m0.930s 00:09:56.227 ************************************ 00:09:56.227 END TEST raid_superblock_test 00:09:56.227 ************************************ 00:09:56.227 20:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:56.227 20:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.227 20:22:49 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:09:56.227 20:22:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:56.227 20:22:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:56.227 20:22:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:56.227 ************************************ 00:09:56.227 START TEST raid_read_error_test 00:09:56.227 ************************************ 00:09:56.227 20:22:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 read 00:09:56.227 20:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:56.227 20:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:09:56.227 20:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:56.227 20:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:56.227 20:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:56.227 20:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:56.227 20:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:56.227 20:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:56.227 20:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:56.227 20:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:56.227 20:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:56.227 20:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:56.227 20:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:56.227 20:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:56.227 20:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:56.227 20:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:56.227 20:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:56.227 20:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:56.227 20:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:56.227 20:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:56.227 20:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:56.227 20:22:49 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:56.227 20:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:56.227 20:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:56.227 20:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:56.227 20:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.tXvqpzDYMl 00:09:56.227 20:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=78618 00:09:56.227 20:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 78618 00:09:56.227 20:22:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:56.227 20:22:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 78618 ']' 00:09:56.227 20:22:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.227 20:22:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:56.227 20:22:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.227 20:22:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:56.227 20:22:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.486 [2024-11-26 20:22:49.818283] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:56.486 [2024-11-26 20:22:49.818444] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78618 ] 00:09:56.486 [2024-11-26 20:22:49.978560] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.745 [2024-11-26 20:22:50.076514] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.745 [2024-11-26 20:22:50.153551] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:56.745 [2024-11-26 20:22:50.153592] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:57.314 20:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:57.314 20:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:57.314 20:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:57.314 20:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:57.314 20:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.314 20:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.314 BaseBdev1_malloc 00:09:57.314 20:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.314 20:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:57.314 20:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.315 20:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.315 true 00:09:57.315 20:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:57.315 20:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:57.315 20:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.315 20:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.315 [2024-11-26 20:22:50.735065] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:57.315 [2024-11-26 20:22:50.735128] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:57.315 [2024-11-26 20:22:50.735149] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:57.315 [2024-11-26 20:22:50.735167] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:57.315 [2024-11-26 20:22:50.737662] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:57.315 [2024-11-26 20:22:50.737704] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:57.315 BaseBdev1 00:09:57.315 20:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.315 20:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:57.315 20:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:57.315 20:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.315 20:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.315 BaseBdev2_malloc 00:09:57.315 20:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.315 20:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:57.315 20:22:50 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.315 20:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.315 true 00:09:57.315 20:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.315 20:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:57.315 20:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.315 20:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.315 [2024-11-26 20:22:50.790225] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:57.315 [2024-11-26 20:22:50.790289] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:57.315 [2024-11-26 20:22:50.790312] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:57.315 [2024-11-26 20:22:50.790322] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:57.315 [2024-11-26 20:22:50.792756] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:57.315 [2024-11-26 20:22:50.792796] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:57.315 BaseBdev2 00:09:57.315 20:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.315 20:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:57.315 20:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:57.315 20:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.315 20:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.315 BaseBdev3_malloc 00:09:57.315 20:22:50 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.315 20:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:57.315 20:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.315 20:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.315 true 00:09:57.315 20:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.315 20:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:57.315 20:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.315 20:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.315 [2024-11-26 20:22:50.833734] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:57.315 [2024-11-26 20:22:50.833793] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:57.315 [2024-11-26 20:22:50.833814] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:57.315 [2024-11-26 20:22:50.833825] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:57.315 [2024-11-26 20:22:50.836114] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:57.315 [2024-11-26 20:22:50.836226] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:57.315 BaseBdev3 00:09:57.315 20:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.315 20:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:57.315 20:22:50 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.315 20:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.315 [2024-11-26 20:22:50.845834] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:57.315 [2024-11-26 20:22:50.847958] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:57.315 [2024-11-26 20:22:50.848049] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:57.315 [2024-11-26 20:22:50.848244] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:57.315 [2024-11-26 20:22:50.848263] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:57.315 [2024-11-26 20:22:50.848551] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:57.315 [2024-11-26 20:22:50.848730] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:57.315 [2024-11-26 20:22:50.848744] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:09:57.315 [2024-11-26 20:22:50.848894] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:57.315 20:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.315 20:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:57.315 20:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:57.315 20:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:57.315 20:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:57.315 20:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.315 20:22:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:57.315 20:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.315 20:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.315 20:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.315 20:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.315 20:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.315 20:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:57.315 20:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.315 20:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.574 20:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.574 20:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.574 "name": "raid_bdev1", 00:09:57.574 "uuid": "effe5372-2b6b-4292-a127-85cc7cf27797", 00:09:57.574 "strip_size_kb": 64, 00:09:57.574 "state": "online", 00:09:57.574 "raid_level": "concat", 00:09:57.574 "superblock": true, 00:09:57.574 "num_base_bdevs": 3, 00:09:57.574 "num_base_bdevs_discovered": 3, 00:09:57.574 "num_base_bdevs_operational": 3, 00:09:57.574 "base_bdevs_list": [ 00:09:57.574 { 00:09:57.574 "name": "BaseBdev1", 00:09:57.574 "uuid": "1e8bca6c-a61d-5697-8b9c-911dee8e8a1d", 00:09:57.574 "is_configured": true, 00:09:57.574 "data_offset": 2048, 00:09:57.574 "data_size": 63488 00:09:57.574 }, 00:09:57.574 { 00:09:57.574 "name": "BaseBdev2", 00:09:57.575 "uuid": "15a9af0e-0a68-5d06-9960-45decc989d56", 00:09:57.575 "is_configured": true, 00:09:57.575 "data_offset": 2048, 00:09:57.575 "data_size": 63488 
00:09:57.575 }, 00:09:57.575 { 00:09:57.575 "name": "BaseBdev3", 00:09:57.575 "uuid": "4a7902f8-a519-50c9-a8b4-4de813ba78f0", 00:09:57.575 "is_configured": true, 00:09:57.575 "data_offset": 2048, 00:09:57.575 "data_size": 63488 00:09:57.575 } 00:09:57.575 ] 00:09:57.575 }' 00:09:57.575 20:22:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.575 20:22:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.834 20:22:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:57.834 20:22:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:57.834 [2024-11-26 20:22:51.381375] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:58.836 20:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:58.836 20:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.836 20:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.836 20:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.836 20:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:58.836 20:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:58.836 20:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:58.836 20:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:58.836 20:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:58.836 20:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:58.836 20:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:58.836 20:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.836 20:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:58.836 20:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.836 20:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.836 20:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.836 20:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.836 20:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.836 20:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:58.836 20:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.836 20:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.836 20:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.836 20:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.836 "name": "raid_bdev1", 00:09:58.836 "uuid": "effe5372-2b6b-4292-a127-85cc7cf27797", 00:09:58.836 "strip_size_kb": 64, 00:09:58.836 "state": "online", 00:09:58.836 "raid_level": "concat", 00:09:58.836 "superblock": true, 00:09:58.836 "num_base_bdevs": 3, 00:09:58.836 "num_base_bdevs_discovered": 3, 00:09:58.836 "num_base_bdevs_operational": 3, 00:09:58.836 "base_bdevs_list": [ 00:09:58.836 { 00:09:58.836 "name": "BaseBdev1", 00:09:58.836 "uuid": "1e8bca6c-a61d-5697-8b9c-911dee8e8a1d", 00:09:58.836 "is_configured": true, 00:09:58.836 "data_offset": 2048, 00:09:58.836 "data_size": 63488 
00:09:58.836 }, 00:09:58.836 { 00:09:58.836 "name": "BaseBdev2", 00:09:58.836 "uuid": "15a9af0e-0a68-5d06-9960-45decc989d56", 00:09:58.836 "is_configured": true, 00:09:58.836 "data_offset": 2048, 00:09:58.836 "data_size": 63488 00:09:58.836 }, 00:09:58.836 { 00:09:58.836 "name": "BaseBdev3", 00:09:58.836 "uuid": "4a7902f8-a519-50c9-a8b4-4de813ba78f0", 00:09:58.836 "is_configured": true, 00:09:58.836 "data_offset": 2048, 00:09:58.836 "data_size": 63488 00:09:58.836 } 00:09:58.836 ] 00:09:58.836 }' 00:09:58.836 20:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.836 20:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.403 20:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:59.403 20:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.403 20:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.403 [2024-11-26 20:22:52.758600] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:59.403 [2024-11-26 20:22:52.758656] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:59.403 [2024-11-26 20:22:52.761700] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:59.403 [2024-11-26 20:22:52.761761] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:59.403 [2024-11-26 20:22:52.761801] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:59.403 [2024-11-26 20:22:52.761813] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:09:59.403 { 00:09:59.403 "results": [ 00:09:59.403 { 00:09:59.403 "job": "raid_bdev1", 00:09:59.403 "core_mask": "0x1", 00:09:59.403 "workload": "randrw", 00:09:59.403 "percentage": 50, 
00:09:59.403 "status": "finished", 00:09:59.403 "queue_depth": 1, 00:09:59.403 "io_size": 131072, 00:09:59.403 "runtime": 1.377805, 00:09:59.403 "iops": 14635.597925686146, 00:09:59.403 "mibps": 1829.4497407107683, 00:09:59.403 "io_failed": 1, 00:09:59.403 "io_timeout": 0, 00:09:59.403 "avg_latency_us": 95.3149054983376, 00:09:59.403 "min_latency_us": 26.1589519650655, 00:09:59.403 "max_latency_us": 1702.7912663755458 00:09:59.403 } 00:09:59.403 ], 00:09:59.403 "core_count": 1 00:09:59.403 } 00:09:59.403 20:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.403 20:22:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 78618 00:09:59.403 20:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 78618 ']' 00:09:59.403 20:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 78618 00:09:59.403 20:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:59.403 20:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:59.403 20:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78618 00:09:59.403 20:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:59.403 20:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:59.403 20:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78618' 00:09:59.403 killing process with pid 78618 00:09:59.403 20:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 78618 00:09:59.403 [2024-11-26 20:22:52.810668] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:59.403 20:22:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 78618 00:09:59.403 [2024-11-26 
20:22:52.852306] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:59.662 20:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.tXvqpzDYMl 00:09:59.662 20:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:59.662 20:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:59.921 20:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:59.921 20:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:59.921 20:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:59.921 20:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:59.921 20:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:09:59.921 00:09:59.921 real 0m3.519s 00:09:59.921 user 0m4.376s 00:09:59.921 sys 0m0.642s 00:09:59.921 20:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:59.921 20:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.921 ************************************ 00:09:59.921 END TEST raid_read_error_test 00:09:59.921 ************************************ 00:09:59.921 20:22:53 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:09:59.921 20:22:53 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:59.921 20:22:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:59.921 20:22:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:59.921 ************************************ 00:09:59.921 START TEST raid_write_error_test 00:09:59.921 ************************************ 00:09:59.922 20:22:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 write 00:09:59.922 20:22:53 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:59.922 20:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:59.922 20:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:59.922 20:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:59.922 20:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:59.922 20:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:59.922 20:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:59.922 20:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:59.922 20:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:59.922 20:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:59.922 20:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:59.922 20:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:59.922 20:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:59.922 20:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:59.922 20:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:59.922 20:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:59.922 20:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:59.922 20:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:59.922 20:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:59.922 20:22:53 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:59.922 20:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:59.922 20:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:59.922 20:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:59.922 20:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:59.922 20:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:59.922 20:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.LMyyYMQ0BE 00:09:59.922 20:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=78758 00:09:59.922 20:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:59.922 20:22:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 78758 00:09:59.922 20:22:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 78758 ']' 00:09:59.922 20:22:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.922 20:22:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:59.922 20:22:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:59.922 20:22:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:59.922 20:22:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.922 [2024-11-26 20:22:53.398327] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:59.922 [2024-11-26 20:22:53.398454] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78758 ] 00:10:00.203 [2024-11-26 20:22:53.559391] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.203 [2024-11-26 20:22:53.637104] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.203 [2024-11-26 20:22:53.710398] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:00.203 [2024-11-26 20:22:53.710528] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:00.773 20:22:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:00.773 20:22:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:00.773 20:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:00.773 20:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:00.773 20:22:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.773 20:22:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.773 BaseBdev1_malloc 00:10:00.773 20:22:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.773 20:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:00.773 20:22:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.773 20:22:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.773 true 00:10:00.773 20:22:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.773 20:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:00.773 20:22:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.773 20:22:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.773 [2024-11-26 20:22:54.280010] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:00.773 [2024-11-26 20:22:54.280096] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:00.773 [2024-11-26 20:22:54.280123] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:00.773 [2024-11-26 20:22:54.280133] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:00.773 [2024-11-26 20:22:54.282589] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:00.773 [2024-11-26 20:22:54.282689] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:00.773 BaseBdev1 00:10:00.773 20:22:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.773 20:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:00.773 20:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:00.773 20:22:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.773 20:22:54 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:00.773 BaseBdev2_malloc 00:10:00.774 20:22:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.774 20:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:00.774 20:22:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.774 20:22:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.032 true 00:10:01.032 20:22:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.032 20:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:01.033 20:22:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.033 20:22:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.033 [2024-11-26 20:22:54.332299] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:01.033 [2024-11-26 20:22:54.332474] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:01.033 [2024-11-26 20:22:54.332505] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:01.033 [2024-11-26 20:22:54.332517] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:01.033 [2024-11-26 20:22:54.335057] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:01.033 [2024-11-26 20:22:54.335098] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:01.033 BaseBdev2 00:10:01.033 20:22:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.033 20:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:01.033 20:22:54 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:01.033 20:22:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.033 20:22:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.033 BaseBdev3_malloc 00:10:01.033 20:22:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.033 20:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:01.033 20:22:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.033 20:22:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.033 true 00:10:01.033 20:22:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.033 20:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:01.033 20:22:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.033 20:22:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.033 [2024-11-26 20:22:54.379013] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:01.033 [2024-11-26 20:22:54.379073] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:01.033 [2024-11-26 20:22:54.379096] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:01.033 [2024-11-26 20:22:54.379106] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:01.033 [2024-11-26 20:22:54.381460] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:01.033 [2024-11-26 20:22:54.381504] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:01.033 BaseBdev3 00:10:01.033 20:22:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.033 20:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:01.033 20:22:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.033 20:22:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.033 [2024-11-26 20:22:54.391129] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:01.033 [2024-11-26 20:22:54.393300] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:01.033 [2024-11-26 20:22:54.393467] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:01.033 [2024-11-26 20:22:54.393705] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:01.033 [2024-11-26 20:22:54.393724] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:01.033 [2024-11-26 20:22:54.394040] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:10:01.033 [2024-11-26 20:22:54.394204] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:01.033 [2024-11-26 20:22:54.394215] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:10:01.033 [2024-11-26 20:22:54.394393] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:01.033 20:22:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.033 20:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:01.033 20:22:54 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:01.033 20:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:01.033 20:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:01.033 20:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.033 20:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:01.033 20:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.033 20:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.033 20:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.033 20:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.033 20:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.033 20:22:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.033 20:22:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.033 20:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:01.033 20:22:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.033 20:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.033 "name": "raid_bdev1", 00:10:01.033 "uuid": "52fd0db9-1410-449d-a965-785f5a9dac02", 00:10:01.033 "strip_size_kb": 64, 00:10:01.033 "state": "online", 00:10:01.033 "raid_level": "concat", 00:10:01.033 "superblock": true, 00:10:01.033 "num_base_bdevs": 3, 00:10:01.033 "num_base_bdevs_discovered": 3, 00:10:01.033 "num_base_bdevs_operational": 3, 00:10:01.033 "base_bdevs_list": [ 00:10:01.033 { 00:10:01.033 
"name": "BaseBdev1", 00:10:01.033 "uuid": "c56de1e6-5184-5ad5-aa91-93d3dafd7cf9", 00:10:01.033 "is_configured": true, 00:10:01.033 "data_offset": 2048, 00:10:01.033 "data_size": 63488 00:10:01.033 }, 00:10:01.033 { 00:10:01.033 "name": "BaseBdev2", 00:10:01.033 "uuid": "e0c33274-dcd9-5a32-9bda-ff81dd9615c3", 00:10:01.033 "is_configured": true, 00:10:01.033 "data_offset": 2048, 00:10:01.033 "data_size": 63488 00:10:01.033 }, 00:10:01.033 { 00:10:01.033 "name": "BaseBdev3", 00:10:01.033 "uuid": "9e81a0f3-a117-5263-b85e-d2a0f56c93cd", 00:10:01.033 "is_configured": true, 00:10:01.033 "data_offset": 2048, 00:10:01.033 "data_size": 63488 00:10:01.033 } 00:10:01.033 ] 00:10:01.033 }' 00:10:01.033 20:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.033 20:22:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.293 20:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:01.293 20:22:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:01.553 [2024-11-26 20:22:54.926583] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:02.583 20:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:02.583 20:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.583 20:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.583 20:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.583 20:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:02.583 20:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:02.583 20:22:55 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:02.583 20:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:02.583 20:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:02.583 20:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:02.583 20:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:02.583 20:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.583 20:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.583 20:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.583 20:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.583 20:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.583 20:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.583 20:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.583 20:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:02.583 20:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.583 20:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.583 20:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.583 20:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.583 "name": "raid_bdev1", 00:10:02.583 "uuid": "52fd0db9-1410-449d-a965-785f5a9dac02", 00:10:02.583 "strip_size_kb": 64, 00:10:02.583 "state": "online", 
00:10:02.583 "raid_level": "concat", 00:10:02.583 "superblock": true, 00:10:02.583 "num_base_bdevs": 3, 00:10:02.583 "num_base_bdevs_discovered": 3, 00:10:02.583 "num_base_bdevs_operational": 3, 00:10:02.583 "base_bdevs_list": [ 00:10:02.583 { 00:10:02.583 "name": "BaseBdev1", 00:10:02.583 "uuid": "c56de1e6-5184-5ad5-aa91-93d3dafd7cf9", 00:10:02.583 "is_configured": true, 00:10:02.583 "data_offset": 2048, 00:10:02.583 "data_size": 63488 00:10:02.583 }, 00:10:02.583 { 00:10:02.583 "name": "BaseBdev2", 00:10:02.583 "uuid": "e0c33274-dcd9-5a32-9bda-ff81dd9615c3", 00:10:02.583 "is_configured": true, 00:10:02.583 "data_offset": 2048, 00:10:02.583 "data_size": 63488 00:10:02.583 }, 00:10:02.583 { 00:10:02.583 "name": "BaseBdev3", 00:10:02.583 "uuid": "9e81a0f3-a117-5263-b85e-d2a0f56c93cd", 00:10:02.583 "is_configured": true, 00:10:02.583 "data_offset": 2048, 00:10:02.583 "data_size": 63488 00:10:02.583 } 00:10:02.583 ] 00:10:02.583 }' 00:10:02.583 20:22:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.583 20:22:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.843 20:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:02.843 20:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.843 20:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.843 [2024-11-26 20:22:56.276127] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:02.843 [2024-11-26 20:22:56.276221] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:02.843 [2024-11-26 20:22:56.278967] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:02.843 [2024-11-26 20:22:56.279070] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:02.843 [2024-11-26 20:22:56.279131] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:02.843 [2024-11-26 20:22:56.279203] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:10:02.843 { 00:10:02.843 "results": [ 00:10:02.843 { 00:10:02.843 "job": "raid_bdev1", 00:10:02.843 "core_mask": "0x1", 00:10:02.843 "workload": "randrw", 00:10:02.843 "percentage": 50, 00:10:02.843 "status": "finished", 00:10:02.843 "queue_depth": 1, 00:10:02.843 "io_size": 131072, 00:10:02.843 "runtime": 1.350118, 00:10:02.843 "iops": 13583.997843151488, 00:10:02.843 "mibps": 1697.999730393936, 00:10:02.843 "io_failed": 1, 00:10:02.843 "io_timeout": 0, 00:10:02.843 "avg_latency_us": 102.7995496285912, 00:10:02.843 "min_latency_us": 26.382532751091702, 00:10:02.843 "max_latency_us": 1645.5545851528384 00:10:02.843 } 00:10:02.843 ], 00:10:02.843 "core_count": 1 00:10:02.843 } 00:10:02.843 20:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.843 20:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 78758 00:10:02.843 20:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 78758 ']' 00:10:02.843 20:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 78758 00:10:02.843 20:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:10:02.843 20:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:02.843 20:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78758 00:10:02.843 20:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:02.843 killing process with pid 78758 00:10:02.843 20:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:02.843 20:22:56 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78758' 00:10:02.843 20:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 78758 00:10:02.843 [2024-11-26 20:22:56.330866] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:02.843 20:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 78758 00:10:02.843 [2024-11-26 20:22:56.374611] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:03.414 20:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.LMyyYMQ0BE 00:10:03.414 20:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:03.414 20:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:03.414 20:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:10:03.414 20:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:03.414 20:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:03.414 20:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:03.414 20:22:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:10:03.414 00:10:03.414 real 0m3.454s 00:10:03.414 user 0m4.236s 00:10:03.414 sys 0m0.632s 00:10:03.414 20:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:03.414 ************************************ 00:10:03.414 END TEST raid_write_error_test 00:10:03.414 ************************************ 00:10:03.414 20:22:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.414 20:22:56 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:03.414 20:22:56 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:10:03.414 20:22:56 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:03.414 20:22:56 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:03.414 20:22:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:03.414 ************************************ 00:10:03.414 START TEST raid_state_function_test 00:10:03.414 ************************************ 00:10:03.414 20:22:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 false 00:10:03.414 20:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:03.414 20:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:03.414 20:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:03.414 20:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:03.414 20:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:03.414 20:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:03.414 20:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:03.414 20:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:03.414 20:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:03.414 20:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:03.414 20:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:03.414 20:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:03.414 20:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:03.414 20:22:56 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:03.414 20:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:03.414 20:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:03.414 20:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:03.414 20:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:03.414 20:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:03.414 20:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:03.414 20:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:03.414 20:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:03.414 20:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:03.414 20:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:03.414 20:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:03.414 20:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=78885 00:10:03.414 20:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:03.414 20:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 78885' 00:10:03.414 Process raid pid: 78885 00:10:03.414 20:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 78885 00:10:03.414 20:22:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 78885 ']' 00:10:03.414 20:22:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.414 20:22:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:03.414 20:22:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:03.414 20:22:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:03.414 20:22:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.414 [2024-11-26 20:22:56.922891] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:10:03.414 [2024-11-26 20:22:56.923129] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:03.673 [2024-11-26 20:22:57.088512] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.673 [2024-11-26 20:22:57.169902] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.931 [2024-11-26 20:22:57.246471] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:03.932 [2024-11-26 20:22:57.246593] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:04.500 20:22:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:04.500 20:22:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:10:04.500 20:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:04.500 20:22:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.500 20:22:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.500 [2024-11-26 20:22:57.805436] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:04.500 [2024-11-26 20:22:57.805491] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:04.500 [2024-11-26 20:22:57.805504] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:04.500 [2024-11-26 20:22:57.805515] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:04.500 [2024-11-26 20:22:57.805521] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:04.500 [2024-11-26 20:22:57.805534] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:04.500 20:22:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.500 20:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:04.500 20:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.500 20:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.500 20:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:04.500 20:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:04.500 20:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:04.500 20:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.500 20:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.500 
20:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.500 20:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.500 20:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.500 20:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.500 20:22:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.500 20:22:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.500 20:22:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.500 20:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.500 "name": "Existed_Raid", 00:10:04.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.500 "strip_size_kb": 0, 00:10:04.500 "state": "configuring", 00:10:04.500 "raid_level": "raid1", 00:10:04.500 "superblock": false, 00:10:04.500 "num_base_bdevs": 3, 00:10:04.500 "num_base_bdevs_discovered": 0, 00:10:04.500 "num_base_bdevs_operational": 3, 00:10:04.500 "base_bdevs_list": [ 00:10:04.500 { 00:10:04.500 "name": "BaseBdev1", 00:10:04.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.500 "is_configured": false, 00:10:04.500 "data_offset": 0, 00:10:04.500 "data_size": 0 00:10:04.500 }, 00:10:04.500 { 00:10:04.500 "name": "BaseBdev2", 00:10:04.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.500 "is_configured": false, 00:10:04.500 "data_offset": 0, 00:10:04.500 "data_size": 0 00:10:04.500 }, 00:10:04.500 { 00:10:04.500 "name": "BaseBdev3", 00:10:04.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.500 "is_configured": false, 00:10:04.500 "data_offset": 0, 00:10:04.500 "data_size": 0 00:10:04.500 } 00:10:04.500 ] 00:10:04.500 }' 00:10:04.500 20:22:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.500 20:22:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.760 20:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:04.760 20:22:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.760 20:22:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.760 [2024-11-26 20:22:58.268552] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:04.760 [2024-11-26 20:22:58.268665] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:10:04.760 20:22:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.760 20:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:04.760 20:22:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.760 20:22:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.760 [2024-11-26 20:22:58.276563] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:04.760 [2024-11-26 20:22:58.276610] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:04.760 [2024-11-26 20:22:58.276627] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:04.760 [2024-11-26 20:22:58.276638] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:04.760 [2024-11-26 20:22:58.276644] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:04.760 [2024-11-26 20:22:58.276653] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:04.760 20:22:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.760 20:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:04.760 20:22:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.760 20:22:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.760 [2024-11-26 20:22:58.299371] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:04.760 BaseBdev1 00:10:04.760 20:22:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.760 20:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:04.760 20:22:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:04.760 20:22:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:04.760 20:22:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:04.760 20:22:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:04.760 20:22:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:04.760 20:22:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:04.760 20:22:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.760 20:22:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.020 20:22:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.020 20:22:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:05.020 20:22:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.020 20:22:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.020 [ 00:10:05.020 { 00:10:05.020 "name": "BaseBdev1", 00:10:05.020 "aliases": [ 00:10:05.020 "bc35c15a-30c8-4303-980f-5afc5edbe8a9" 00:10:05.020 ], 00:10:05.020 "product_name": "Malloc disk", 00:10:05.020 "block_size": 512, 00:10:05.020 "num_blocks": 65536, 00:10:05.020 "uuid": "bc35c15a-30c8-4303-980f-5afc5edbe8a9", 00:10:05.020 "assigned_rate_limits": { 00:10:05.020 "rw_ios_per_sec": 0, 00:10:05.020 "rw_mbytes_per_sec": 0, 00:10:05.020 "r_mbytes_per_sec": 0, 00:10:05.020 "w_mbytes_per_sec": 0 00:10:05.020 }, 00:10:05.020 "claimed": true, 00:10:05.020 "claim_type": "exclusive_write", 00:10:05.020 "zoned": false, 00:10:05.020 "supported_io_types": { 00:10:05.020 "read": true, 00:10:05.020 "write": true, 00:10:05.020 "unmap": true, 00:10:05.020 "flush": true, 00:10:05.020 "reset": true, 00:10:05.020 "nvme_admin": false, 00:10:05.020 "nvme_io": false, 00:10:05.020 "nvme_io_md": false, 00:10:05.020 "write_zeroes": true, 00:10:05.020 "zcopy": true, 00:10:05.020 "get_zone_info": false, 00:10:05.020 "zone_management": false, 00:10:05.020 "zone_append": false, 00:10:05.020 "compare": false, 00:10:05.020 "compare_and_write": false, 00:10:05.020 "abort": true, 00:10:05.020 "seek_hole": false, 00:10:05.020 "seek_data": false, 00:10:05.020 "copy": true, 00:10:05.020 "nvme_iov_md": false 00:10:05.020 }, 00:10:05.020 "memory_domains": [ 00:10:05.020 { 00:10:05.020 "dma_device_id": "system", 00:10:05.020 "dma_device_type": 1 00:10:05.020 }, 00:10:05.020 { 00:10:05.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.020 "dma_device_type": 2 00:10:05.020 } 00:10:05.020 ], 00:10:05.020 "driver_specific": {} 00:10:05.020 } 00:10:05.020 ] 00:10:05.020 20:22:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.020 20:22:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:05.020 20:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:05.020 20:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.020 20:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.020 20:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:05.020 20:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:05.020 20:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:05.020 20:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.020 20:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.020 20:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.020 20:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.020 20:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.020 20:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.020 20:22:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.020 20:22:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.020 20:22:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.020 20:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:10:05.020 "name": "Existed_Raid", 00:10:05.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.020 "strip_size_kb": 0, 00:10:05.020 "state": "configuring", 00:10:05.020 "raid_level": "raid1", 00:10:05.020 "superblock": false, 00:10:05.020 "num_base_bdevs": 3, 00:10:05.020 "num_base_bdevs_discovered": 1, 00:10:05.020 "num_base_bdevs_operational": 3, 00:10:05.020 "base_bdevs_list": [ 00:10:05.020 { 00:10:05.020 "name": "BaseBdev1", 00:10:05.020 "uuid": "bc35c15a-30c8-4303-980f-5afc5edbe8a9", 00:10:05.020 "is_configured": true, 00:10:05.020 "data_offset": 0, 00:10:05.020 "data_size": 65536 00:10:05.020 }, 00:10:05.020 { 00:10:05.020 "name": "BaseBdev2", 00:10:05.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.020 "is_configured": false, 00:10:05.020 "data_offset": 0, 00:10:05.020 "data_size": 0 00:10:05.020 }, 00:10:05.020 { 00:10:05.020 "name": "BaseBdev3", 00:10:05.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.021 "is_configured": false, 00:10:05.021 "data_offset": 0, 00:10:05.021 "data_size": 0 00:10:05.021 } 00:10:05.021 ] 00:10:05.021 }' 00:10:05.021 20:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.021 20:22:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.280 20:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:05.280 20:22:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.280 20:22:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.280 [2024-11-26 20:22:58.742747] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:05.280 [2024-11-26 20:22:58.742866] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:10:05.280 20:22:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.280 20:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:05.280 20:22:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.280 20:22:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.280 [2024-11-26 20:22:58.754778] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:05.280 [2024-11-26 20:22:58.756933] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:05.280 [2024-11-26 20:22:58.757022] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:05.280 [2024-11-26 20:22:58.757056] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:05.280 [2024-11-26 20:22:58.757084] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:05.280 20:22:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.280 20:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:05.280 20:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:05.280 20:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:05.280 20:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.280 20:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.280 20:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:05.281 20:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:10:05.281 20:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:05.281 20:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.281 20:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.281 20:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.281 20:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.281 20:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.281 20:22:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.281 20:22:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.281 20:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.281 20:22:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.281 20:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.281 "name": "Existed_Raid", 00:10:05.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.281 "strip_size_kb": 0, 00:10:05.281 "state": "configuring", 00:10:05.281 "raid_level": "raid1", 00:10:05.281 "superblock": false, 00:10:05.281 "num_base_bdevs": 3, 00:10:05.281 "num_base_bdevs_discovered": 1, 00:10:05.281 "num_base_bdevs_operational": 3, 00:10:05.281 "base_bdevs_list": [ 00:10:05.281 { 00:10:05.281 "name": "BaseBdev1", 00:10:05.281 "uuid": "bc35c15a-30c8-4303-980f-5afc5edbe8a9", 00:10:05.281 "is_configured": true, 00:10:05.281 "data_offset": 0, 00:10:05.281 "data_size": 65536 00:10:05.281 }, 00:10:05.281 { 00:10:05.281 "name": "BaseBdev2", 00:10:05.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.281 
"is_configured": false, 00:10:05.281 "data_offset": 0, 00:10:05.281 "data_size": 0 00:10:05.281 }, 00:10:05.281 { 00:10:05.281 "name": "BaseBdev3", 00:10:05.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.281 "is_configured": false, 00:10:05.281 "data_offset": 0, 00:10:05.281 "data_size": 0 00:10:05.281 } 00:10:05.281 ] 00:10:05.281 }' 00:10:05.281 20:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.281 20:22:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.850 20:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:05.850 20:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.850 20:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.850 [2024-11-26 20:22:59.227717] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:05.850 BaseBdev2 00:10:05.850 20:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.850 20:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:05.850 20:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:05.850 20:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:05.850 20:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:05.850 20:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:05.850 20:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:05.850 20:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:05.850 20:22:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.850 20:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.850 20:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.850 20:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:05.850 20:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.851 20:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.851 [ 00:10:05.851 { 00:10:05.851 "name": "BaseBdev2", 00:10:05.851 "aliases": [ 00:10:05.851 "a69e4f18-15ed-45c9-9a9a-9c869986960d" 00:10:05.851 ], 00:10:05.851 "product_name": "Malloc disk", 00:10:05.851 "block_size": 512, 00:10:05.851 "num_blocks": 65536, 00:10:05.851 "uuid": "a69e4f18-15ed-45c9-9a9a-9c869986960d", 00:10:05.851 "assigned_rate_limits": { 00:10:05.851 "rw_ios_per_sec": 0, 00:10:05.851 "rw_mbytes_per_sec": 0, 00:10:05.851 "r_mbytes_per_sec": 0, 00:10:05.851 "w_mbytes_per_sec": 0 00:10:05.851 }, 00:10:05.851 "claimed": true, 00:10:05.851 "claim_type": "exclusive_write", 00:10:05.851 "zoned": false, 00:10:05.851 "supported_io_types": { 00:10:05.851 "read": true, 00:10:05.851 "write": true, 00:10:05.851 "unmap": true, 00:10:05.851 "flush": true, 00:10:05.851 "reset": true, 00:10:05.851 "nvme_admin": false, 00:10:05.851 "nvme_io": false, 00:10:05.851 "nvme_io_md": false, 00:10:05.851 "write_zeroes": true, 00:10:05.851 "zcopy": true, 00:10:05.851 "get_zone_info": false, 00:10:05.851 "zone_management": false, 00:10:05.851 "zone_append": false, 00:10:05.851 "compare": false, 00:10:05.851 "compare_and_write": false, 00:10:05.851 "abort": true, 00:10:05.851 "seek_hole": false, 00:10:05.851 "seek_data": false, 00:10:05.851 "copy": true, 00:10:05.851 "nvme_iov_md": false 00:10:05.851 }, 00:10:05.851 
"memory_domains": [ 00:10:05.851 { 00:10:05.851 "dma_device_id": "system", 00:10:05.851 "dma_device_type": 1 00:10:05.851 }, 00:10:05.851 { 00:10:05.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.851 "dma_device_type": 2 00:10:05.851 } 00:10:05.851 ], 00:10:05.851 "driver_specific": {} 00:10:05.851 } 00:10:05.851 ] 00:10:05.851 20:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.851 20:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:05.851 20:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:05.851 20:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:05.851 20:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:05.851 20:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.851 20:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.851 20:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:05.851 20:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:05.851 20:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:05.851 20:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.851 20:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.851 20:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.851 20:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.851 20:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:05.851 20:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.851 20:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.851 20:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.851 20:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.851 20:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.851 "name": "Existed_Raid", 00:10:05.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.851 "strip_size_kb": 0, 00:10:05.851 "state": "configuring", 00:10:05.851 "raid_level": "raid1", 00:10:05.851 "superblock": false, 00:10:05.851 "num_base_bdevs": 3, 00:10:05.851 "num_base_bdevs_discovered": 2, 00:10:05.851 "num_base_bdevs_operational": 3, 00:10:05.851 "base_bdevs_list": [ 00:10:05.851 { 00:10:05.851 "name": "BaseBdev1", 00:10:05.851 "uuid": "bc35c15a-30c8-4303-980f-5afc5edbe8a9", 00:10:05.851 "is_configured": true, 00:10:05.851 "data_offset": 0, 00:10:05.851 "data_size": 65536 00:10:05.851 }, 00:10:05.851 { 00:10:05.851 "name": "BaseBdev2", 00:10:05.851 "uuid": "a69e4f18-15ed-45c9-9a9a-9c869986960d", 00:10:05.851 "is_configured": true, 00:10:05.851 "data_offset": 0, 00:10:05.851 "data_size": 65536 00:10:05.851 }, 00:10:05.851 { 00:10:05.851 "name": "BaseBdev3", 00:10:05.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.851 "is_configured": false, 00:10:05.851 "data_offset": 0, 00:10:05.851 "data_size": 0 00:10:05.851 } 00:10:05.851 ] 00:10:05.851 }' 00:10:05.851 20:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.851 20:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.421 20:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:10:06.421 20:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.421 20:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.421 [2024-11-26 20:22:59.720097] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:06.421 [2024-11-26 20:22:59.720234] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:06.421 [2024-11-26 20:22:59.720267] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:06.421 [2024-11-26 20:22:59.720675] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:10:06.421 [2024-11-26 20:22:59.720884] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:06.421 [2024-11-26 20:22:59.720935] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:10:06.421 [2024-11-26 20:22:59.721181] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:06.421 BaseBdev3 00:10:06.421 20:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.421 20:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:06.421 20:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:06.421 20:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:06.421 20:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:06.421 20:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:06.421 20:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:06.421 20:22:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:06.421 20:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.421 20:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.421 20:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.421 20:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:06.421 20:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.421 20:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.421 [ 00:10:06.421 { 00:10:06.421 "name": "BaseBdev3", 00:10:06.421 "aliases": [ 00:10:06.421 "1e4ca1a4-c5ee-43c7-a937-7bb6bbd6690c" 00:10:06.421 ], 00:10:06.421 "product_name": "Malloc disk", 00:10:06.421 "block_size": 512, 00:10:06.421 "num_blocks": 65536, 00:10:06.421 "uuid": "1e4ca1a4-c5ee-43c7-a937-7bb6bbd6690c", 00:10:06.421 "assigned_rate_limits": { 00:10:06.421 "rw_ios_per_sec": 0, 00:10:06.421 "rw_mbytes_per_sec": 0, 00:10:06.421 "r_mbytes_per_sec": 0, 00:10:06.421 "w_mbytes_per_sec": 0 00:10:06.421 }, 00:10:06.421 "claimed": true, 00:10:06.421 "claim_type": "exclusive_write", 00:10:06.421 "zoned": false, 00:10:06.421 "supported_io_types": { 00:10:06.421 "read": true, 00:10:06.421 "write": true, 00:10:06.421 "unmap": true, 00:10:06.421 "flush": true, 00:10:06.421 "reset": true, 00:10:06.421 "nvme_admin": false, 00:10:06.421 "nvme_io": false, 00:10:06.421 "nvme_io_md": false, 00:10:06.421 "write_zeroes": true, 00:10:06.421 "zcopy": true, 00:10:06.421 "get_zone_info": false, 00:10:06.421 "zone_management": false, 00:10:06.422 "zone_append": false, 00:10:06.422 "compare": false, 00:10:06.422 "compare_and_write": false, 00:10:06.422 "abort": true, 00:10:06.422 "seek_hole": false, 00:10:06.422 "seek_data": false, 00:10:06.422 
"copy": true, 00:10:06.422 "nvme_iov_md": false 00:10:06.422 }, 00:10:06.422 "memory_domains": [ 00:10:06.422 { 00:10:06.422 "dma_device_id": "system", 00:10:06.422 "dma_device_type": 1 00:10:06.422 }, 00:10:06.422 { 00:10:06.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.422 "dma_device_type": 2 00:10:06.422 } 00:10:06.422 ], 00:10:06.422 "driver_specific": {} 00:10:06.422 } 00:10:06.422 ] 00:10:06.422 20:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.422 20:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:06.422 20:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:06.422 20:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:06.422 20:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:06.422 20:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.422 20:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:06.422 20:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:06.422 20:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:06.422 20:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:06.422 20:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.422 20:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.422 20:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.422 20:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.422 20:22:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.422 20:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.422 20:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.422 20:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.422 20:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.422 20:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.422 "name": "Existed_Raid", 00:10:06.422 "uuid": "4349eb1f-34f9-43bd-ba5e-ead8f154b4e0", 00:10:06.422 "strip_size_kb": 0, 00:10:06.422 "state": "online", 00:10:06.422 "raid_level": "raid1", 00:10:06.422 "superblock": false, 00:10:06.422 "num_base_bdevs": 3, 00:10:06.422 "num_base_bdevs_discovered": 3, 00:10:06.422 "num_base_bdevs_operational": 3, 00:10:06.422 "base_bdevs_list": [ 00:10:06.422 { 00:10:06.422 "name": "BaseBdev1", 00:10:06.422 "uuid": "bc35c15a-30c8-4303-980f-5afc5edbe8a9", 00:10:06.422 "is_configured": true, 00:10:06.422 "data_offset": 0, 00:10:06.422 "data_size": 65536 00:10:06.422 }, 00:10:06.422 { 00:10:06.422 "name": "BaseBdev2", 00:10:06.422 "uuid": "a69e4f18-15ed-45c9-9a9a-9c869986960d", 00:10:06.422 "is_configured": true, 00:10:06.422 "data_offset": 0, 00:10:06.422 "data_size": 65536 00:10:06.422 }, 00:10:06.422 { 00:10:06.422 "name": "BaseBdev3", 00:10:06.422 "uuid": "1e4ca1a4-c5ee-43c7-a937-7bb6bbd6690c", 00:10:06.422 "is_configured": true, 00:10:06.422 "data_offset": 0, 00:10:06.422 "data_size": 65536 00:10:06.422 } 00:10:06.422 ] 00:10:06.422 }' 00:10:06.422 20:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.422 20:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.682 20:23:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:06.682 20:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:06.682 20:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:06.682 20:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:06.682 20:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:06.682 20:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:06.682 20:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:06.682 20:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:06.682 20:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.682 20:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.682 [2024-11-26 20:23:00.199718] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:06.682 20:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.942 20:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:06.942 "name": "Existed_Raid", 00:10:06.942 "aliases": [ 00:10:06.942 "4349eb1f-34f9-43bd-ba5e-ead8f154b4e0" 00:10:06.942 ], 00:10:06.942 "product_name": "Raid Volume", 00:10:06.942 "block_size": 512, 00:10:06.942 "num_blocks": 65536, 00:10:06.942 "uuid": "4349eb1f-34f9-43bd-ba5e-ead8f154b4e0", 00:10:06.942 "assigned_rate_limits": { 00:10:06.942 "rw_ios_per_sec": 0, 00:10:06.942 "rw_mbytes_per_sec": 0, 00:10:06.942 "r_mbytes_per_sec": 0, 00:10:06.942 "w_mbytes_per_sec": 0 00:10:06.942 }, 00:10:06.942 "claimed": false, 00:10:06.942 "zoned": false, 
00:10:06.942 "supported_io_types": { 00:10:06.942 "read": true, 00:10:06.942 "write": true, 00:10:06.942 "unmap": false, 00:10:06.942 "flush": false, 00:10:06.942 "reset": true, 00:10:06.942 "nvme_admin": false, 00:10:06.942 "nvme_io": false, 00:10:06.942 "nvme_io_md": false, 00:10:06.942 "write_zeroes": true, 00:10:06.942 "zcopy": false, 00:10:06.942 "get_zone_info": false, 00:10:06.942 "zone_management": false, 00:10:06.942 "zone_append": false, 00:10:06.942 "compare": false, 00:10:06.942 "compare_and_write": false, 00:10:06.942 "abort": false, 00:10:06.942 "seek_hole": false, 00:10:06.942 "seek_data": false, 00:10:06.942 "copy": false, 00:10:06.942 "nvme_iov_md": false 00:10:06.942 }, 00:10:06.942 "memory_domains": [ 00:10:06.942 { 00:10:06.942 "dma_device_id": "system", 00:10:06.942 "dma_device_type": 1 00:10:06.942 }, 00:10:06.942 { 00:10:06.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.942 "dma_device_type": 2 00:10:06.942 }, 00:10:06.942 { 00:10:06.942 "dma_device_id": "system", 00:10:06.942 "dma_device_type": 1 00:10:06.942 }, 00:10:06.942 { 00:10:06.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.943 "dma_device_type": 2 00:10:06.943 }, 00:10:06.943 { 00:10:06.943 "dma_device_id": "system", 00:10:06.943 "dma_device_type": 1 00:10:06.943 }, 00:10:06.943 { 00:10:06.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.943 "dma_device_type": 2 00:10:06.943 } 00:10:06.943 ], 00:10:06.943 "driver_specific": { 00:10:06.943 "raid": { 00:10:06.943 "uuid": "4349eb1f-34f9-43bd-ba5e-ead8f154b4e0", 00:10:06.943 "strip_size_kb": 0, 00:10:06.943 "state": "online", 00:10:06.943 "raid_level": "raid1", 00:10:06.943 "superblock": false, 00:10:06.943 "num_base_bdevs": 3, 00:10:06.943 "num_base_bdevs_discovered": 3, 00:10:06.943 "num_base_bdevs_operational": 3, 00:10:06.943 "base_bdevs_list": [ 00:10:06.943 { 00:10:06.943 "name": "BaseBdev1", 00:10:06.943 "uuid": "bc35c15a-30c8-4303-980f-5afc5edbe8a9", 00:10:06.943 "is_configured": true, 00:10:06.943 
"data_offset": 0, 00:10:06.943 "data_size": 65536 00:10:06.943 }, 00:10:06.943 { 00:10:06.943 "name": "BaseBdev2", 00:10:06.943 "uuid": "a69e4f18-15ed-45c9-9a9a-9c869986960d", 00:10:06.943 "is_configured": true, 00:10:06.943 "data_offset": 0, 00:10:06.943 "data_size": 65536 00:10:06.943 }, 00:10:06.943 { 00:10:06.943 "name": "BaseBdev3", 00:10:06.943 "uuid": "1e4ca1a4-c5ee-43c7-a937-7bb6bbd6690c", 00:10:06.943 "is_configured": true, 00:10:06.943 "data_offset": 0, 00:10:06.943 "data_size": 65536 00:10:06.943 } 00:10:06.943 ] 00:10:06.943 } 00:10:06.943 } 00:10:06.943 }' 00:10:06.943 20:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:06.943 20:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:06.943 BaseBdev2 00:10:06.943 BaseBdev3' 00:10:06.943 20:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.943 20:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:06.943 20:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.943 20:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:06.943 20:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.943 20:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.943 20:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.943 20:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.943 20:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:10:06.943 20:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.943 20:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.943 20:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:06.943 20:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.943 20:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.943 20:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.943 20:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.943 20:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.943 20:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.943 20:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.943 20:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.943 20:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:06.943 20:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.943 20:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.943 20:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.943 20:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:07.203 20:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:10:07.203 20:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:07.203 20:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.203 20:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.203 [2024-11-26 20:23:00.498940] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:07.203 20:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.203 20:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:07.203 20:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:07.203 20:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:07.203 20:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:07.203 20:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:07.203 20:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:07.203 20:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.203 20:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:07.203 20:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:07.203 20:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:07.203 20:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:07.203 20:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.203 20:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:10:07.203 20:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.203 20:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.203 20:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.203 20:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.203 20:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.203 20:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.203 20:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.203 20:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.203 "name": "Existed_Raid", 00:10:07.203 "uuid": "4349eb1f-34f9-43bd-ba5e-ead8f154b4e0", 00:10:07.203 "strip_size_kb": 0, 00:10:07.203 "state": "online", 00:10:07.203 "raid_level": "raid1", 00:10:07.203 "superblock": false, 00:10:07.203 "num_base_bdevs": 3, 00:10:07.203 "num_base_bdevs_discovered": 2, 00:10:07.203 "num_base_bdevs_operational": 2, 00:10:07.203 "base_bdevs_list": [ 00:10:07.203 { 00:10:07.203 "name": null, 00:10:07.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.203 "is_configured": false, 00:10:07.203 "data_offset": 0, 00:10:07.203 "data_size": 65536 00:10:07.203 }, 00:10:07.203 { 00:10:07.203 "name": "BaseBdev2", 00:10:07.203 "uuid": "a69e4f18-15ed-45c9-9a9a-9c869986960d", 00:10:07.203 "is_configured": true, 00:10:07.203 "data_offset": 0, 00:10:07.203 "data_size": 65536 00:10:07.203 }, 00:10:07.203 { 00:10:07.203 "name": "BaseBdev3", 00:10:07.203 "uuid": "1e4ca1a4-c5ee-43c7-a937-7bb6bbd6690c", 00:10:07.203 "is_configured": true, 00:10:07.203 "data_offset": 0, 00:10:07.203 "data_size": 65536 00:10:07.203 } 00:10:07.203 ] 
00:10:07.203 }' 00:10:07.203 20:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.203 20:23:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.480 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:07.480 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:07.480 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.480 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.480 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.480 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:07.741 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.741 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:07.741 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:07.741 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:07.741 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.741 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.741 [2024-11-26 20:23:01.075351] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:07.741 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.741 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:07.741 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:07.741 20:23:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.741 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:07.741 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.741 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.741 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.741 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:07.741 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:07.741 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:07.741 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.741 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.741 [2024-11-26 20:23:01.160709] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:07.741 [2024-11-26 20:23:01.160824] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:07.741 [2024-11-26 20:23:01.175983] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:07.741 [2024-11-26 20:23:01.176037] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:07.741 [2024-11-26 20:23:01.176054] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:10:07.741 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.741 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:07.741 20:23:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:07.741 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.741 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.741 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.741 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:07.741 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.741 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:07.741 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:07.741 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:07.741 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:07.741 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:07.741 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:07.741 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.741 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.741 BaseBdev2 00:10:07.741 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.741 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:07.741 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:07.741 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:07.741 
20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:07.741 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:07.741 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:07.741 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:07.741 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.741 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.741 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.741 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:07.741 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.741 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.741 [ 00:10:07.741 { 00:10:07.741 "name": "BaseBdev2", 00:10:07.741 "aliases": [ 00:10:07.741 "fa07ab4a-8aad-477d-aa37-37a0e4ea598c" 00:10:07.741 ], 00:10:07.741 "product_name": "Malloc disk", 00:10:07.741 "block_size": 512, 00:10:07.741 "num_blocks": 65536, 00:10:07.741 "uuid": "fa07ab4a-8aad-477d-aa37-37a0e4ea598c", 00:10:07.741 "assigned_rate_limits": { 00:10:07.741 "rw_ios_per_sec": 0, 00:10:07.741 "rw_mbytes_per_sec": 0, 00:10:07.741 "r_mbytes_per_sec": 0, 00:10:07.741 "w_mbytes_per_sec": 0 00:10:07.741 }, 00:10:07.741 "claimed": false, 00:10:07.741 "zoned": false, 00:10:07.741 "supported_io_types": { 00:10:07.741 "read": true, 00:10:07.741 "write": true, 00:10:07.741 "unmap": true, 00:10:07.741 "flush": true, 00:10:07.741 "reset": true, 00:10:07.741 "nvme_admin": false, 00:10:07.741 "nvme_io": false, 00:10:07.741 "nvme_io_md": false, 00:10:07.741 "write_zeroes": true, 
00:10:07.741 "zcopy": true, 00:10:07.741 "get_zone_info": false, 00:10:07.741 "zone_management": false, 00:10:07.741 "zone_append": false, 00:10:07.741 "compare": false, 00:10:07.741 "compare_and_write": false, 00:10:07.741 "abort": true, 00:10:07.741 "seek_hole": false, 00:10:07.741 "seek_data": false, 00:10:07.741 "copy": true, 00:10:07.741 "nvme_iov_md": false 00:10:07.741 }, 00:10:07.741 "memory_domains": [ 00:10:07.741 { 00:10:07.741 "dma_device_id": "system", 00:10:07.741 "dma_device_type": 1 00:10:07.741 }, 00:10:07.741 { 00:10:07.741 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.741 "dma_device_type": 2 00:10:07.741 } 00:10:07.741 ], 00:10:07.741 "driver_specific": {} 00:10:07.741 } 00:10:07.741 ] 00:10:07.741 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.741 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:07.741 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:07.741 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:07.741 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:07.741 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.741 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.002 BaseBdev3 00:10:08.002 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.002 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:08.002 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:08.002 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:08.002 20:23:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:08.002 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:08.002 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:08.002 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:08.002 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.002 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.002 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.002 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:08.002 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.002 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.002 [ 00:10:08.002 { 00:10:08.002 "name": "BaseBdev3", 00:10:08.002 "aliases": [ 00:10:08.002 "9a78c6ad-fd44-49ca-bb5c-2776a0aa9030" 00:10:08.002 ], 00:10:08.002 "product_name": "Malloc disk", 00:10:08.002 "block_size": 512, 00:10:08.002 "num_blocks": 65536, 00:10:08.002 "uuid": "9a78c6ad-fd44-49ca-bb5c-2776a0aa9030", 00:10:08.002 "assigned_rate_limits": { 00:10:08.002 "rw_ios_per_sec": 0, 00:10:08.002 "rw_mbytes_per_sec": 0, 00:10:08.002 "r_mbytes_per_sec": 0, 00:10:08.002 "w_mbytes_per_sec": 0 00:10:08.002 }, 00:10:08.002 "claimed": false, 00:10:08.002 "zoned": false, 00:10:08.002 "supported_io_types": { 00:10:08.002 "read": true, 00:10:08.002 "write": true, 00:10:08.002 "unmap": true, 00:10:08.002 "flush": true, 00:10:08.002 "reset": true, 00:10:08.002 "nvme_admin": false, 00:10:08.002 "nvme_io": false, 00:10:08.002 "nvme_io_md": false, 00:10:08.002 "write_zeroes": true, 
00:10:08.002 "zcopy": true, 00:10:08.002 "get_zone_info": false, 00:10:08.002 "zone_management": false, 00:10:08.002 "zone_append": false, 00:10:08.002 "compare": false, 00:10:08.002 "compare_and_write": false, 00:10:08.002 "abort": true, 00:10:08.002 "seek_hole": false, 00:10:08.002 "seek_data": false, 00:10:08.002 "copy": true, 00:10:08.002 "nvme_iov_md": false 00:10:08.002 }, 00:10:08.002 "memory_domains": [ 00:10:08.002 { 00:10:08.002 "dma_device_id": "system", 00:10:08.002 "dma_device_type": 1 00:10:08.002 }, 00:10:08.002 { 00:10:08.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.002 "dma_device_type": 2 00:10:08.002 } 00:10:08.002 ], 00:10:08.002 "driver_specific": {} 00:10:08.002 } 00:10:08.002 ] 00:10:08.002 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.002 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:08.002 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:08.002 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:08.002 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:08.002 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.002 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.002 [2024-11-26 20:23:01.353670] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:08.002 [2024-11-26 20:23:01.353820] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:08.002 [2024-11-26 20:23:01.353877] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:08.002 [2024-11-26 20:23:01.356049] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:08.002 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.002 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:08.002 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.002 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.002 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:08.002 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:08.002 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:08.002 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.002 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.002 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.002 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.003 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.003 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.003 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.003 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.003 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.003 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:10:08.003 "name": "Existed_Raid", 00:10:08.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.003 "strip_size_kb": 0, 00:10:08.003 "state": "configuring", 00:10:08.003 "raid_level": "raid1", 00:10:08.003 "superblock": false, 00:10:08.003 "num_base_bdevs": 3, 00:10:08.003 "num_base_bdevs_discovered": 2, 00:10:08.003 "num_base_bdevs_operational": 3, 00:10:08.003 "base_bdevs_list": [ 00:10:08.003 { 00:10:08.003 "name": "BaseBdev1", 00:10:08.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.003 "is_configured": false, 00:10:08.003 "data_offset": 0, 00:10:08.003 "data_size": 0 00:10:08.003 }, 00:10:08.003 { 00:10:08.003 "name": "BaseBdev2", 00:10:08.003 "uuid": "fa07ab4a-8aad-477d-aa37-37a0e4ea598c", 00:10:08.003 "is_configured": true, 00:10:08.003 "data_offset": 0, 00:10:08.003 "data_size": 65536 00:10:08.003 }, 00:10:08.003 { 00:10:08.003 "name": "BaseBdev3", 00:10:08.003 "uuid": "9a78c6ad-fd44-49ca-bb5c-2776a0aa9030", 00:10:08.003 "is_configured": true, 00:10:08.003 "data_offset": 0, 00:10:08.003 "data_size": 65536 00:10:08.003 } 00:10:08.003 ] 00:10:08.003 }' 00:10:08.003 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.003 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.573 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:08.573 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.573 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.573 [2024-11-26 20:23:01.824833] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:08.573 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.573 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:10:08.573 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.573 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.573 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:08.573 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:08.573 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:08.573 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.573 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.573 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.573 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.573 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.573 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.573 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.573 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.573 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.573 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.573 "name": "Existed_Raid", 00:10:08.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.573 "strip_size_kb": 0, 00:10:08.573 "state": "configuring", 00:10:08.573 "raid_level": "raid1", 00:10:08.573 "superblock": false, 00:10:08.573 "num_base_bdevs": 3, 
00:10:08.573 "num_base_bdevs_discovered": 1, 00:10:08.573 "num_base_bdevs_operational": 3, 00:10:08.573 "base_bdevs_list": [ 00:10:08.573 { 00:10:08.573 "name": "BaseBdev1", 00:10:08.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.573 "is_configured": false, 00:10:08.573 "data_offset": 0, 00:10:08.573 "data_size": 0 00:10:08.573 }, 00:10:08.573 { 00:10:08.573 "name": null, 00:10:08.573 "uuid": "fa07ab4a-8aad-477d-aa37-37a0e4ea598c", 00:10:08.573 "is_configured": false, 00:10:08.573 "data_offset": 0, 00:10:08.573 "data_size": 65536 00:10:08.573 }, 00:10:08.573 { 00:10:08.573 "name": "BaseBdev3", 00:10:08.573 "uuid": "9a78c6ad-fd44-49ca-bb5c-2776a0aa9030", 00:10:08.573 "is_configured": true, 00:10:08.573 "data_offset": 0, 00:10:08.573 "data_size": 65536 00:10:08.573 } 00:10:08.573 ] 00:10:08.573 }' 00:10:08.573 20:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.573 20:23:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.833 20:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.833 20:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:08.833 20:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.833 20:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.833 20:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.833 20:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:08.833 20:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:08.833 20:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.833 20:23:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.833 [2024-11-26 20:23:02.301198] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:08.833 BaseBdev1 00:10:08.833 20:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.833 20:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:08.833 20:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:08.833 20:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:08.833 20:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:08.833 20:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:08.833 20:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:08.833 20:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:08.833 20:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.833 20:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.833 20:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.833 20:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:08.833 20:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.833 20:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.833 [ 00:10:08.833 { 00:10:08.833 "name": "BaseBdev1", 00:10:08.834 "aliases": [ 00:10:08.834 "eecf8025-54c8-4b66-94bd-652421487f67" 00:10:08.834 ], 00:10:08.834 "product_name": "Malloc disk", 
00:10:08.834 "block_size": 512, 00:10:08.834 "num_blocks": 65536, 00:10:08.834 "uuid": "eecf8025-54c8-4b66-94bd-652421487f67", 00:10:08.834 "assigned_rate_limits": { 00:10:08.834 "rw_ios_per_sec": 0, 00:10:08.834 "rw_mbytes_per_sec": 0, 00:10:08.834 "r_mbytes_per_sec": 0, 00:10:08.834 "w_mbytes_per_sec": 0 00:10:08.834 }, 00:10:08.834 "claimed": true, 00:10:08.834 "claim_type": "exclusive_write", 00:10:08.834 "zoned": false, 00:10:08.834 "supported_io_types": { 00:10:08.834 "read": true, 00:10:08.834 "write": true, 00:10:08.834 "unmap": true, 00:10:08.834 "flush": true, 00:10:08.834 "reset": true, 00:10:08.834 "nvme_admin": false, 00:10:08.834 "nvme_io": false, 00:10:08.834 "nvme_io_md": false, 00:10:08.834 "write_zeroes": true, 00:10:08.834 "zcopy": true, 00:10:08.834 "get_zone_info": false, 00:10:08.834 "zone_management": false, 00:10:08.834 "zone_append": false, 00:10:08.834 "compare": false, 00:10:08.834 "compare_and_write": false, 00:10:08.834 "abort": true, 00:10:08.834 "seek_hole": false, 00:10:08.834 "seek_data": false, 00:10:08.834 "copy": true, 00:10:08.834 "nvme_iov_md": false 00:10:08.834 }, 00:10:08.834 "memory_domains": [ 00:10:08.834 { 00:10:08.834 "dma_device_id": "system", 00:10:08.834 "dma_device_type": 1 00:10:08.834 }, 00:10:08.834 { 00:10:08.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.834 "dma_device_type": 2 00:10:08.834 } 00:10:08.834 ], 00:10:08.834 "driver_specific": {} 00:10:08.834 } 00:10:08.834 ] 00:10:08.834 20:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.834 20:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:08.834 20:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:08.834 20:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.834 20:23:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.834 20:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:08.834 20:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:08.834 20:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:08.834 20:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.834 20:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.834 20:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.834 20:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.834 20:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.834 20:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.834 20:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.834 20:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.834 20:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.093 20:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.093 "name": "Existed_Raid", 00:10:09.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.093 "strip_size_kb": 0, 00:10:09.093 "state": "configuring", 00:10:09.093 "raid_level": "raid1", 00:10:09.093 "superblock": false, 00:10:09.093 "num_base_bdevs": 3, 00:10:09.093 "num_base_bdevs_discovered": 2, 00:10:09.093 "num_base_bdevs_operational": 3, 00:10:09.093 "base_bdevs_list": [ 00:10:09.093 { 00:10:09.093 "name": "BaseBdev1", 00:10:09.093 "uuid": 
"eecf8025-54c8-4b66-94bd-652421487f67", 00:10:09.093 "is_configured": true, 00:10:09.093 "data_offset": 0, 00:10:09.093 "data_size": 65536 00:10:09.093 }, 00:10:09.093 { 00:10:09.093 "name": null, 00:10:09.093 "uuid": "fa07ab4a-8aad-477d-aa37-37a0e4ea598c", 00:10:09.093 "is_configured": false, 00:10:09.093 "data_offset": 0, 00:10:09.093 "data_size": 65536 00:10:09.093 }, 00:10:09.093 { 00:10:09.093 "name": "BaseBdev3", 00:10:09.093 "uuid": "9a78c6ad-fd44-49ca-bb5c-2776a0aa9030", 00:10:09.093 "is_configured": true, 00:10:09.093 "data_offset": 0, 00:10:09.093 "data_size": 65536 00:10:09.093 } 00:10:09.093 ] 00:10:09.093 }' 00:10:09.093 20:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.093 20:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.354 20:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.354 20:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:09.354 20:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.354 20:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.354 20:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.354 20:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:09.354 20:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:09.354 20:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.354 20:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.354 [2024-11-26 20:23:02.880400] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:09.354 20:23:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.354 20:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:09.354 20:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.354 20:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.354 20:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:09.354 20:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:09.354 20:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:09.354 20:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.354 20:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.354 20:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.354 20:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.354 20:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.354 20:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.354 20:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.354 20:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.614 20:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.614 20:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.614 "name": "Existed_Raid", 00:10:09.614 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:09.614 "strip_size_kb": 0, 00:10:09.614 "state": "configuring", 00:10:09.614 "raid_level": "raid1", 00:10:09.614 "superblock": false, 00:10:09.614 "num_base_bdevs": 3, 00:10:09.614 "num_base_bdevs_discovered": 1, 00:10:09.614 "num_base_bdevs_operational": 3, 00:10:09.614 "base_bdevs_list": [ 00:10:09.614 { 00:10:09.614 "name": "BaseBdev1", 00:10:09.614 "uuid": "eecf8025-54c8-4b66-94bd-652421487f67", 00:10:09.614 "is_configured": true, 00:10:09.614 "data_offset": 0, 00:10:09.614 "data_size": 65536 00:10:09.614 }, 00:10:09.614 { 00:10:09.614 "name": null, 00:10:09.614 "uuid": "fa07ab4a-8aad-477d-aa37-37a0e4ea598c", 00:10:09.614 "is_configured": false, 00:10:09.614 "data_offset": 0, 00:10:09.614 "data_size": 65536 00:10:09.614 }, 00:10:09.614 { 00:10:09.614 "name": null, 00:10:09.614 "uuid": "9a78c6ad-fd44-49ca-bb5c-2776a0aa9030", 00:10:09.614 "is_configured": false, 00:10:09.614 "data_offset": 0, 00:10:09.614 "data_size": 65536 00:10:09.614 } 00:10:09.614 ] 00:10:09.614 }' 00:10:09.614 20:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.614 20:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.873 20:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.873 20:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:09.873 20:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.873 20:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.873 20:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.873 20:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:09.873 20:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:09.873 20:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.873 20:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.873 [2024-11-26 20:23:03.407575] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:09.873 20:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.873 20:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:09.873 20:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.873 20:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.873 20:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:09.873 20:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:09.873 20:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:09.873 20:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.873 20:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.873 20:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.873 20:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.873 20:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.873 20:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.873 20:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:09.873 20:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.132 20:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.132 20:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.132 "name": "Existed_Raid", 00:10:10.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.132 "strip_size_kb": 0, 00:10:10.132 "state": "configuring", 00:10:10.132 "raid_level": "raid1", 00:10:10.132 "superblock": false, 00:10:10.132 "num_base_bdevs": 3, 00:10:10.132 "num_base_bdevs_discovered": 2, 00:10:10.132 "num_base_bdevs_operational": 3, 00:10:10.132 "base_bdevs_list": [ 00:10:10.132 { 00:10:10.132 "name": "BaseBdev1", 00:10:10.132 "uuid": "eecf8025-54c8-4b66-94bd-652421487f67", 00:10:10.132 "is_configured": true, 00:10:10.132 "data_offset": 0, 00:10:10.132 "data_size": 65536 00:10:10.132 }, 00:10:10.132 { 00:10:10.132 "name": null, 00:10:10.132 "uuid": "fa07ab4a-8aad-477d-aa37-37a0e4ea598c", 00:10:10.132 "is_configured": false, 00:10:10.132 "data_offset": 0, 00:10:10.132 "data_size": 65536 00:10:10.132 }, 00:10:10.132 { 00:10:10.132 "name": "BaseBdev3", 00:10:10.132 "uuid": "9a78c6ad-fd44-49ca-bb5c-2776a0aa9030", 00:10:10.132 "is_configured": true, 00:10:10.132 "data_offset": 0, 00:10:10.132 "data_size": 65536 00:10:10.132 } 00:10:10.132 ] 00:10:10.132 }' 00:10:10.132 20:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.132 20:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.391 20:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:10.391 20:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.391 20:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:10.391 20:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.391 20:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.650 20:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:10.651 20:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:10.651 20:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.651 20:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.651 [2024-11-26 20:23:03.954727] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:10.651 20:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.651 20:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:10.651 20:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.651 20:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.651 20:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:10.651 20:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:10.651 20:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:10.651 20:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.651 20:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.651 20:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.651 20:23:03 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.651 20:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.651 20:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.651 20:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.651 20:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.651 20:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.651 20:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.651 "name": "Existed_Raid", 00:10:10.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.651 "strip_size_kb": 0, 00:10:10.651 "state": "configuring", 00:10:10.651 "raid_level": "raid1", 00:10:10.651 "superblock": false, 00:10:10.651 "num_base_bdevs": 3, 00:10:10.651 "num_base_bdevs_discovered": 1, 00:10:10.651 "num_base_bdevs_operational": 3, 00:10:10.651 "base_bdevs_list": [ 00:10:10.651 { 00:10:10.651 "name": null, 00:10:10.651 "uuid": "eecf8025-54c8-4b66-94bd-652421487f67", 00:10:10.651 "is_configured": false, 00:10:10.651 "data_offset": 0, 00:10:10.651 "data_size": 65536 00:10:10.651 }, 00:10:10.651 { 00:10:10.651 "name": null, 00:10:10.651 "uuid": "fa07ab4a-8aad-477d-aa37-37a0e4ea598c", 00:10:10.651 "is_configured": false, 00:10:10.651 "data_offset": 0, 00:10:10.651 "data_size": 65536 00:10:10.651 }, 00:10:10.651 { 00:10:10.651 "name": "BaseBdev3", 00:10:10.651 "uuid": "9a78c6ad-fd44-49ca-bb5c-2776a0aa9030", 00:10:10.651 "is_configured": true, 00:10:10.651 "data_offset": 0, 00:10:10.651 "data_size": 65536 00:10:10.651 } 00:10:10.651 ] 00:10:10.651 }' 00:10:10.651 20:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.651 20:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- 
# set +x 00:10:10.910 20:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.910 20:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:10.910 20:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.910 20:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.910 20:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.910 20:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:10.910 20:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:11.168 20:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.168 20:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.169 [2024-11-26 20:23:04.466282] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:11.169 20:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.169 20:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:11.169 20:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.169 20:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.169 20:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:11.169 20:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:11.169 20:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:10:11.169 20:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.169 20:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.169 20:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.169 20:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.169 20:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.169 20:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.169 20:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.169 20:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.169 20:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.169 20:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.169 "name": "Existed_Raid", 00:10:11.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.169 "strip_size_kb": 0, 00:10:11.169 "state": "configuring", 00:10:11.169 "raid_level": "raid1", 00:10:11.169 "superblock": false, 00:10:11.169 "num_base_bdevs": 3, 00:10:11.169 "num_base_bdevs_discovered": 2, 00:10:11.169 "num_base_bdevs_operational": 3, 00:10:11.169 "base_bdevs_list": [ 00:10:11.169 { 00:10:11.169 "name": null, 00:10:11.169 "uuid": "eecf8025-54c8-4b66-94bd-652421487f67", 00:10:11.169 "is_configured": false, 00:10:11.169 "data_offset": 0, 00:10:11.169 "data_size": 65536 00:10:11.169 }, 00:10:11.169 { 00:10:11.169 "name": "BaseBdev2", 00:10:11.169 "uuid": "fa07ab4a-8aad-477d-aa37-37a0e4ea598c", 00:10:11.169 "is_configured": true, 00:10:11.169 "data_offset": 0, 00:10:11.169 "data_size": 65536 00:10:11.169 }, 00:10:11.169 { 00:10:11.169 "name": "BaseBdev3", 
00:10:11.169 "uuid": "9a78c6ad-fd44-49ca-bb5c-2776a0aa9030", 00:10:11.169 "is_configured": true, 00:10:11.169 "data_offset": 0, 00:10:11.169 "data_size": 65536 00:10:11.169 } 00:10:11.169 ] 00:10:11.169 }' 00:10:11.169 20:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.169 20:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.427 20:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.427 20:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.427 20:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.427 20:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:11.686 20:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.686 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:11.686 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.686 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:11.686 20:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.686 20:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.686 20:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.686 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u eecf8025-54c8-4b66-94bd-652421487f67 00:10:11.686 20:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.686 20:23:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:11.686 [2024-11-26 20:23:05.114816] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:11.686 [2024-11-26 20:23:05.114954] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:11.686 [2024-11-26 20:23:05.114985] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:11.686 [2024-11-26 20:23:05.115324] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:11.686 [2024-11-26 20:23:05.115536] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:11.686 [2024-11-26 20:23:05.115597] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:10:11.686 [2024-11-26 20:23:05.115862] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:11.686 NewBaseBdev 00:10:11.686 20:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.686 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:11.686 20:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:11.686 20:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:11.686 20:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:11.686 20:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:11.686 20:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:11.686 20:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:11.686 20:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.686 
20:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.686 20:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.686 20:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:11.686 20:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.686 20:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.686 [ 00:10:11.686 { 00:10:11.686 "name": "NewBaseBdev", 00:10:11.686 "aliases": [ 00:10:11.686 "eecf8025-54c8-4b66-94bd-652421487f67" 00:10:11.686 ], 00:10:11.686 "product_name": "Malloc disk", 00:10:11.686 "block_size": 512, 00:10:11.686 "num_blocks": 65536, 00:10:11.686 "uuid": "eecf8025-54c8-4b66-94bd-652421487f67", 00:10:11.686 "assigned_rate_limits": { 00:10:11.686 "rw_ios_per_sec": 0, 00:10:11.686 "rw_mbytes_per_sec": 0, 00:10:11.686 "r_mbytes_per_sec": 0, 00:10:11.686 "w_mbytes_per_sec": 0 00:10:11.686 }, 00:10:11.686 "claimed": true, 00:10:11.686 "claim_type": "exclusive_write", 00:10:11.686 "zoned": false, 00:10:11.686 "supported_io_types": { 00:10:11.686 "read": true, 00:10:11.686 "write": true, 00:10:11.686 "unmap": true, 00:10:11.686 "flush": true, 00:10:11.686 "reset": true, 00:10:11.686 "nvme_admin": false, 00:10:11.686 "nvme_io": false, 00:10:11.686 "nvme_io_md": false, 00:10:11.686 "write_zeroes": true, 00:10:11.686 "zcopy": true, 00:10:11.686 "get_zone_info": false, 00:10:11.686 "zone_management": false, 00:10:11.686 "zone_append": false, 00:10:11.686 "compare": false, 00:10:11.686 "compare_and_write": false, 00:10:11.686 "abort": true, 00:10:11.686 "seek_hole": false, 00:10:11.686 "seek_data": false, 00:10:11.686 "copy": true, 00:10:11.686 "nvme_iov_md": false 00:10:11.686 }, 00:10:11.686 "memory_domains": [ 00:10:11.686 { 00:10:11.686 "dma_device_id": "system", 00:10:11.686 "dma_device_type": 1 
00:10:11.686 }, 00:10:11.686 { 00:10:11.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.686 "dma_device_type": 2 00:10:11.686 } 00:10:11.686 ], 00:10:11.686 "driver_specific": {} 00:10:11.686 } 00:10:11.686 ] 00:10:11.686 20:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.686 20:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:11.686 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:11.686 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.686 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:11.686 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:11.686 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:11.686 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:11.686 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.686 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.686 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.686 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.686 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.686 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.686 20:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.686 20:23:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:11.686 20:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.686 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.686 "name": "Existed_Raid", 00:10:11.686 "uuid": "2ec2e894-4b8a-4e7b-b9de-4cd83132a731", 00:10:11.686 "strip_size_kb": 0, 00:10:11.686 "state": "online", 00:10:11.686 "raid_level": "raid1", 00:10:11.686 "superblock": false, 00:10:11.687 "num_base_bdevs": 3, 00:10:11.687 "num_base_bdevs_discovered": 3, 00:10:11.687 "num_base_bdevs_operational": 3, 00:10:11.687 "base_bdevs_list": [ 00:10:11.687 { 00:10:11.687 "name": "NewBaseBdev", 00:10:11.687 "uuid": "eecf8025-54c8-4b66-94bd-652421487f67", 00:10:11.687 "is_configured": true, 00:10:11.687 "data_offset": 0, 00:10:11.687 "data_size": 65536 00:10:11.687 }, 00:10:11.687 { 00:10:11.687 "name": "BaseBdev2", 00:10:11.687 "uuid": "fa07ab4a-8aad-477d-aa37-37a0e4ea598c", 00:10:11.687 "is_configured": true, 00:10:11.687 "data_offset": 0, 00:10:11.687 "data_size": 65536 00:10:11.687 }, 00:10:11.687 { 00:10:11.687 "name": "BaseBdev3", 00:10:11.687 "uuid": "9a78c6ad-fd44-49ca-bb5c-2776a0aa9030", 00:10:11.687 "is_configured": true, 00:10:11.687 "data_offset": 0, 00:10:11.687 "data_size": 65536 00:10:11.687 } 00:10:11.687 ] 00:10:11.687 }' 00:10:11.687 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.687 20:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.282 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:12.282 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:12.282 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:12.282 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:10:12.282 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:12.282 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:12.282 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:12.282 20:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.282 20:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.282 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:12.282 [2024-11-26 20:23:05.630377] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:12.282 20:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.282 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:12.282 "name": "Existed_Raid", 00:10:12.282 "aliases": [ 00:10:12.282 "2ec2e894-4b8a-4e7b-b9de-4cd83132a731" 00:10:12.282 ], 00:10:12.282 "product_name": "Raid Volume", 00:10:12.282 "block_size": 512, 00:10:12.282 "num_blocks": 65536, 00:10:12.282 "uuid": "2ec2e894-4b8a-4e7b-b9de-4cd83132a731", 00:10:12.282 "assigned_rate_limits": { 00:10:12.282 "rw_ios_per_sec": 0, 00:10:12.282 "rw_mbytes_per_sec": 0, 00:10:12.282 "r_mbytes_per_sec": 0, 00:10:12.282 "w_mbytes_per_sec": 0 00:10:12.282 }, 00:10:12.282 "claimed": false, 00:10:12.282 "zoned": false, 00:10:12.282 "supported_io_types": { 00:10:12.282 "read": true, 00:10:12.282 "write": true, 00:10:12.282 "unmap": false, 00:10:12.282 "flush": false, 00:10:12.282 "reset": true, 00:10:12.282 "nvme_admin": false, 00:10:12.282 "nvme_io": false, 00:10:12.282 "nvme_io_md": false, 00:10:12.282 "write_zeroes": true, 00:10:12.282 "zcopy": false, 00:10:12.282 "get_zone_info": false, 00:10:12.282 "zone_management": false, 00:10:12.282 
"zone_append": false, 00:10:12.282 "compare": false, 00:10:12.282 "compare_and_write": false, 00:10:12.282 "abort": false, 00:10:12.282 "seek_hole": false, 00:10:12.282 "seek_data": false, 00:10:12.282 "copy": false, 00:10:12.282 "nvme_iov_md": false 00:10:12.282 }, 00:10:12.282 "memory_domains": [ 00:10:12.282 { 00:10:12.282 "dma_device_id": "system", 00:10:12.282 "dma_device_type": 1 00:10:12.282 }, 00:10:12.282 { 00:10:12.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.282 "dma_device_type": 2 00:10:12.282 }, 00:10:12.282 { 00:10:12.282 "dma_device_id": "system", 00:10:12.282 "dma_device_type": 1 00:10:12.282 }, 00:10:12.282 { 00:10:12.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.282 "dma_device_type": 2 00:10:12.282 }, 00:10:12.282 { 00:10:12.282 "dma_device_id": "system", 00:10:12.282 "dma_device_type": 1 00:10:12.282 }, 00:10:12.282 { 00:10:12.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.282 "dma_device_type": 2 00:10:12.282 } 00:10:12.282 ], 00:10:12.282 "driver_specific": { 00:10:12.282 "raid": { 00:10:12.282 "uuid": "2ec2e894-4b8a-4e7b-b9de-4cd83132a731", 00:10:12.282 "strip_size_kb": 0, 00:10:12.282 "state": "online", 00:10:12.282 "raid_level": "raid1", 00:10:12.282 "superblock": false, 00:10:12.282 "num_base_bdevs": 3, 00:10:12.282 "num_base_bdevs_discovered": 3, 00:10:12.282 "num_base_bdevs_operational": 3, 00:10:12.282 "base_bdevs_list": [ 00:10:12.282 { 00:10:12.282 "name": "NewBaseBdev", 00:10:12.282 "uuid": "eecf8025-54c8-4b66-94bd-652421487f67", 00:10:12.282 "is_configured": true, 00:10:12.282 "data_offset": 0, 00:10:12.282 "data_size": 65536 00:10:12.282 }, 00:10:12.282 { 00:10:12.282 "name": "BaseBdev2", 00:10:12.282 "uuid": "fa07ab4a-8aad-477d-aa37-37a0e4ea598c", 00:10:12.282 "is_configured": true, 00:10:12.282 "data_offset": 0, 00:10:12.282 "data_size": 65536 00:10:12.282 }, 00:10:12.282 { 00:10:12.282 "name": "BaseBdev3", 00:10:12.282 "uuid": "9a78c6ad-fd44-49ca-bb5c-2776a0aa9030", 00:10:12.282 "is_configured": true, 
00:10:12.282 "data_offset": 0,
00:10:12.282 "data_size": 65536
00:10:12.282 }
00:10:12.282 ]
00:10:12.282 }
00:10:12.282 }
00:10:12.282 }'
00:10:12.282 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:10:12.282 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:10:12.282 BaseBdev2
00:10:12.282 BaseBdev3'
00:10:12.282 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:12.282 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:10:12.282 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:12.282 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:10:12.282 20:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:12.282 20:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:12.282 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:12.282 20:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:12.282 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:12.282 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:12.282 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:12.282 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:10:12.283 20:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:12.283 20:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:12.283 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:12.543 20:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:12.543 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:12.543 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:12.543 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:12.543 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:12.543 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:10:12.543 20:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:12.543 20:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:12.543 20:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:12.543 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:12.543 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:12.543 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:10:12.543 20:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:12.543 20:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:12.543 [2024-11-26 20:23:05.937522] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:12.543 [2024-11-26 20:23:05.937626] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:10:12.543 [2024-11-26 20:23:05.937757] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:12.543 [2024-11-26 20:23:05.938128] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:12.543 [2024-11-26 20:23:05.938191] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline
00:10:12.543 20:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:12.543 20:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 78885
00:10:12.543 20:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 78885 ']'
00:10:12.543 20:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 78885
00:10:12.543 20:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname
00:10:12.543 20:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:10:12.543 20:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78885
00:10:12.543 killing process with pid 78885
20:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:10:12.543 20:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:10:12.543 20:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78885'
00:10:12.543 20:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 78885
00:10:12.543 20:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 78885
00:10:12.543 [2024-11-26 20:23:05.983361] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:10:12.543 [2024-11-26 20:23:06.039077] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:10:13.111 ************************************
00:10:13.111 END TEST raid_state_function_test
00:10:13.111 ************************************
00:10:13.111 20:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0
00:10:13.111
00:10:13.111 real 0m9.591s
00:10:13.111 user 0m16.209s
00:10:13.111 sys 0m1.968s
00:10:13.111 20:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:13.111 20:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:13.111 20:23:06 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true
00:10:13.111 20:23:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:10:13.111 20:23:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:13.111 20:23:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:10:13.111 ************************************
00:10:13.111 START TEST raid_state_function_test_sb
00:10:13.111 ************************************
00:10:13.111 20:23:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 true
00:10:13.111 20:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1
00:10:13.111 20:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3
00:10:13.111 20:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:10:13.111 20:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:10:13.111 20:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:10:13.111 20:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:13.111 20:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:10:13.111 20:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:10:13.111 20:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:13.111 20:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:10:13.111 20:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:10:13.111 20:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:13.111 20:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:10:13.111 20:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:10:13.111 20:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:13.111 20:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:10:13.111 20:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:10:13.111 20:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:10:13.111 20:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size
00:10:13.111 Process raid pid: 79495
20:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:10:13.111 20:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:10:13.111 20:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']'
00:10:13.111 20:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0
00:10:13.111 20:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:10:13.111 20:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
00:10:13.111 20:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=79495
00:10:13.111 20:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79495'
00:10:13.111 20:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 79495
00:10:13.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
20:23:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 79495 ']'
00:10:13.111 20:23:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:13.111 20:23:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100
00:10:13.111 20:23:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:10:13.111 20:23:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:13.111 20:23:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable
00:10:13.111 20:23:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:13.111 [2024-11-26 20:23:06.559351] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:10:13.111 [2024-11-26 20:23:06.559605] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:13.370 [2024-11-26 20:23:06.727981] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:13.371 [2024-11-26 20:23:06.811962] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:10:13.371 [2024-11-26 20:23:06.891416] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:13.371 [2024-11-26 20:23:06.891544] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:13.940 20:23:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:10:13.940 20:23:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0
00:10:13.940 20:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:10:13.940 20:23:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:13.940 20:23:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:14.200 [2024-11-26 20:23:07.495089] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:14.200 [2024-11-26 20:23:07.495204] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:14.200 [2024-11-26 20:23:07.495222] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:14.200 [2024-11-26 20:23:07.495234] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:14.200 [2024-11-26 20:23:07.495242] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:10:14.200 [2024-11-26 20:23:07.495254] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:10:14.200 20:23:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:14.200 20:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:14.200 20:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:14.200 20:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:14.200 20:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:14.200 20:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:14.200 20:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:14.200 20:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:14.200 20:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:14.200 20:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:14.200 20:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:14.200 20:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:14.200 20:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:14.200 20:23:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:14.200 20:23:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:14.200 20:23:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:14.200 20:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:14.200 "name": "Existed_Raid",
00:10:14.200 "uuid": "7634668f-9603-44c7-ba01-37e84d0563fe",
00:10:14.200 "strip_size_kb": 0,
00:10:14.200 "state": "configuring",
00:10:14.200 "raid_level": "raid1",
00:10:14.200 "superblock": true,
00:10:14.200 "num_base_bdevs": 3,
00:10:14.200 "num_base_bdevs_discovered": 0,
00:10:14.200 "num_base_bdevs_operational": 3,
00:10:14.200 "base_bdevs_list": [
00:10:14.200 {
00:10:14.200 "name": "BaseBdev1",
00:10:14.200 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:14.200 "is_configured": false,
00:10:14.200 "data_offset": 0,
00:10:14.200 "data_size": 0
00:10:14.200 },
00:10:14.200 {
00:10:14.200 "name": "BaseBdev2",
00:10:14.200 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:14.200 "is_configured": false,
00:10:14.200 "data_offset": 0,
00:10:14.200 "data_size": 0
00:10:14.200 },
00:10:14.200 {
00:10:14.200 "name": "BaseBdev3",
00:10:14.200 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:14.200 "is_configured": false,
00:10:14.200 "data_offset": 0,
00:10:14.200 "data_size": 0
00:10:14.200 }
00:10:14.200 ]
00:10:14.200 }'
00:10:14.200 20:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:14.200 20:23:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:14.460 20:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:10:14.460 20:23:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:14.460 20:23:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:14.460 [2024-11-26 20:23:07.918285] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:14.460 [2024-11-26 20:23:07.918338] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring
00:10:14.460 20:23:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:14.460 20:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:10:14.460 20:23:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:14.460 20:23:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:14.460 [2024-11-26 20:23:07.930332] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:14.460 [2024-11-26 20:23:07.930442] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:14.460 [2024-11-26 20:23:07.930478] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:14.460 [2024-11-26 20:23:07.930512] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:14.460 [2024-11-26 20:23:07.930541] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:10:14.460 [2024-11-26 20:23:07.930580] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:10:14.460 20:23:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:14.460 20:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:10:14.460 20:23:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:14.460 20:23:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:14.460 [2024-11-26 20:23:07.953472] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:14.460 BaseBdev1
00:10:14.460 20:23:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:14.460 20:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:10:14.460 20:23:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:10:14.460 20:23:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:10:14.460 20:23:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:10:14.460 20:23:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:10:14.460 20:23:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:10:14.460 20:23:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:10:14.460 20:23:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:14.460 20:23:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:14.461 20:23:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:14.461 20:23:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:10:14.461 20:23:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:14.461 20:23:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:14.461 [
00:10:14.461 {
00:10:14.461 "name": "BaseBdev1",
00:10:14.461 "aliases": [
00:10:14.461 "e3d20b6e-c95a-4705-8c6a-5f1147ce2973"
00:10:14.461 ],
00:10:14.461 "product_name": "Malloc disk",
00:10:14.461 "block_size": 512,
00:10:14.461 "num_blocks": 65536,
00:10:14.461 "uuid": "e3d20b6e-c95a-4705-8c6a-5f1147ce2973",
00:10:14.461 "assigned_rate_limits": {
00:10:14.461 "rw_ios_per_sec": 0,
00:10:14.461 "rw_mbytes_per_sec": 0,
00:10:14.461 "r_mbytes_per_sec": 0,
00:10:14.461 "w_mbytes_per_sec": 0
00:10:14.461 },
00:10:14.461 "claimed": true,
00:10:14.461 "claim_type": "exclusive_write",
00:10:14.461 "zoned": false,
00:10:14.461 "supported_io_types": {
00:10:14.461 "read": true,
00:10:14.461 "write": true,
00:10:14.461 "unmap": true,
00:10:14.461 "flush": true,
00:10:14.461 "reset": true,
00:10:14.461 "nvme_admin": false,
00:10:14.461 "nvme_io": false,
00:10:14.461 "nvme_io_md": false,
00:10:14.461 "write_zeroes": true,
00:10:14.461 "zcopy": true,
00:10:14.461 "get_zone_info": false,
00:10:14.461 "zone_management": false,
00:10:14.461 "zone_append": false,
00:10:14.461 "compare": false,
00:10:14.461 "compare_and_write": false,
00:10:14.461 "abort": true,
00:10:14.461 "seek_hole": false,
00:10:14.461 "seek_data": false,
00:10:14.461 "copy": true,
00:10:14.461 "nvme_iov_md": false
00:10:14.461 },
00:10:14.461 "memory_domains": [
00:10:14.461 {
00:10:14.461 "dma_device_id": "system",
00:10:14.461 "dma_device_type": 1
00:10:14.461 },
00:10:14.461 {
00:10:14.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:14.461 "dma_device_type": 2
00:10:14.461 }
00:10:14.461 ],
00:10:14.461 "driver_specific": {}
00:10:14.461 }
00:10:14.461 ]
00:10:14.461 20:23:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:14.461 20:23:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:10:14.461 20:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:14.461 20:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:14.461 20:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:14.461 20:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:14.461 20:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:14.461 20:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:14.461 20:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:14.461 20:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:14.461 20:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:14.461 20:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:14.461 20:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:14.461 20:23:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:14.461 20:23:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:14.461 20:23:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:14.461 20:23:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:14.720 20:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:14.720 "name": "Existed_Raid",
00:10:14.720 "uuid": "92a4be13-934d-4224-933b-c768e2ec6c02",
00:10:14.720 "strip_size_kb": 0,
00:10:14.720 "state": "configuring",
00:10:14.720 "raid_level": "raid1",
00:10:14.720 "superblock": true,
00:10:14.720 "num_base_bdevs": 3,
00:10:14.720 "num_base_bdevs_discovered": 1,
00:10:14.720 "num_base_bdevs_operational": 3,
00:10:14.720 "base_bdevs_list": [
00:10:14.720 {
00:10:14.720 "name": "BaseBdev1",
00:10:14.720 "uuid": "e3d20b6e-c95a-4705-8c6a-5f1147ce2973",
00:10:14.720 "is_configured": true,
00:10:14.720 "data_offset": 2048,
00:10:14.720 "data_size": 63488
00:10:14.720 },
00:10:14.720 {
00:10:14.720 "name": "BaseBdev2",
00:10:14.720 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:14.720 "is_configured": false,
00:10:14.720 "data_offset": 0,
00:10:14.720 "data_size": 0
00:10:14.720 },
00:10:14.720 {
00:10:14.720 "name": "BaseBdev3",
00:10:14.720 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:14.720 "is_configured": false,
00:10:14.720 "data_offset": 0,
00:10:14.720 "data_size": 0
00:10:14.720 }
00:10:14.720 ]
00:10:14.720 }'
00:10:14.720 20:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:14.720 20:23:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:14.978 20:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:10:14.978 20:23:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:14.978 20:23:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:14.979 [2024-11-26 20:23:08.464738] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:14.979 [2024-11-26 20:23:08.464875] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring
00:10:14.979 20:23:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:14.979 20:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:10:14.979 20:23:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:14.979 20:23:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:14.979 [2024-11-26 20:23:08.476831] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:14.979 [2024-11-26 20:23:08.479121] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:14.979 [2024-11-26 20:23:08.479223] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:14.979 [2024-11-26 20:23:08.479268] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:10:14.979 [2024-11-26 20:23:08.479323] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:10:14.979 20:23:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:14.979 20:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:10:14.979 20:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:10:14.979 20:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:14.979 20:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:14.979 20:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:14.979 20:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:14.979 20:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:14.979 20:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:14.979 20:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:14.979 20:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:14.979 20:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:14.979 20:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:14.979 20:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:14.979 20:23:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:14.979 20:23:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:14.979 20:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:14.979 20:23:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:14.979 20:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:14.979 "name": "Existed_Raid",
00:10:14.979 "uuid": "11e7c85e-7914-4193-a0f6-03db387e5a60",
00:10:14.979 "strip_size_kb": 0,
00:10:14.979 "state": "configuring",
00:10:14.979 "raid_level": "raid1",
00:10:14.979 "superblock": true,
00:10:14.979 "num_base_bdevs": 3,
00:10:14.979 "num_base_bdevs_discovered": 1,
00:10:14.979 "num_base_bdevs_operational": 3,
00:10:14.979 "base_bdevs_list": [
00:10:14.979 {
00:10:14.979 "name": "BaseBdev1",
00:10:14.979 "uuid": "e3d20b6e-c95a-4705-8c6a-5f1147ce2973",
00:10:14.979 "is_configured": true,
00:10:14.979 "data_offset": 2048,
00:10:14.979 "data_size": 63488
00:10:14.979 },
00:10:14.979 {
00:10:14.979 "name": "BaseBdev2",
00:10:14.979 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:14.979 "is_configured": false,
00:10:14.979 "data_offset": 0,
00:10:14.979 "data_size": 0
00:10:14.979 },
00:10:14.979 {
00:10:14.979 "name": "BaseBdev3",
00:10:14.979 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:14.979 "is_configured": false,
00:10:14.979 "data_offset": 0,
00:10:14.979 "data_size": 0
00:10:14.979 }
00:10:14.979 ]
00:10:14.979 }'
00:10:14.979 20:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:14.979 20:23:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:15.547 20:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:10:15.547 20:23:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:15.547 20:23:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:15.547 [2024-11-26 20:23:08.974015] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:15.547 BaseBdev2
00:10:15.547 20:23:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:15.547 20:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:10:15.547 20:23:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:10:15.547 20:23:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:10:15.547 20:23:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:10:15.547 20:23:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:10:15.547 20:23:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:10:15.547 20:23:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:10:15.547 20:23:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:15.547 20:23:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:15.547 20:23:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:15.547 20:23:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:10:15.547 20:23:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:15.547 20:23:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:15.547 [
00:10:15.547 {
00:10:15.547 "name": "BaseBdev2",
00:10:15.547 "aliases": [
00:10:15.547 "b2456f74-0ae4-4511-81a2-5ac704575bda"
00:10:15.547 ],
00:10:15.547 "product_name": "Malloc disk",
00:10:15.547 "block_size": 512,
00:10:15.547 "num_blocks": 65536,
00:10:15.547 "uuid": "b2456f74-0ae4-4511-81a2-5ac704575bda",
00:10:15.547 "assigned_rate_limits": {
00:10:15.547 "rw_ios_per_sec": 0,
00:10:15.547 "rw_mbytes_per_sec": 0,
00:10:15.547 "r_mbytes_per_sec": 0,
00:10:15.547 "w_mbytes_per_sec": 0
00:10:15.547 },
00:10:15.547 "claimed": true,
00:10:15.547 "claim_type": "exclusive_write",
00:10:15.547 "zoned": false,
00:10:15.547 "supported_io_types": {
00:10:15.547 "read": true,
00:10:15.547 "write": true,
00:10:15.547 "unmap": true,
00:10:15.548 "flush": true,
00:10:15.548 "reset": true,
00:10:15.548 "nvme_admin": false,
00:10:15.548 "nvme_io": false,
00:10:15.548 "nvme_io_md": false,
00:10:15.548 "write_zeroes": true,
00:10:15.548 "zcopy": true,
00:10:15.548 "get_zone_info": false,
00:10:15.548 "zone_management": false,
00:10:15.548 "zone_append": false,
00:10:15.548 "compare": false,
00:10:15.548 "compare_and_write": false,
00:10:15.548 "abort": true,
00:10:15.548 "seek_hole": false,
00:10:15.548 "seek_data": false,
00:10:15.548 "copy": true,
00:10:15.548 "nvme_iov_md": false
00:10:15.548 },
00:10:15.548 "memory_domains": [
00:10:15.548 {
00:10:15.548 "dma_device_id": "system",
00:10:15.548 "dma_device_type": 1
00:10:15.548 },
00:10:15.548 {
00:10:15.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:15.548 "dma_device_type": 2
00:10:15.548 }
00:10:15.548 ],
00:10:15.548 "driver_specific": {}
00:10:15.548 }
00:10:15.548 ]
00:10:15.548 20:23:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:15.548 20:23:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:10:15.548 20:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:10:15.548 20:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:10:15.548 20:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:15.548 20:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:15.548 20:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:15.548 20:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:15.548 20:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:15.548 20:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:15.548 20:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:15.548 20:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:15.548 20:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:15.548 20:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:15.548 20:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:15.548 20:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:15.548 20:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:15.548 20:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:15.548 20:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:15.548 20:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:15.548 "name": "Existed_Raid",
00:10:15.548 "uuid": "11e7c85e-7914-4193-a0f6-03db387e5a60",
00:10:15.548 "strip_size_kb": 0,
00:10:15.548 "state": "configuring",
00:10:15.548 "raid_level": "raid1",
00:10:15.548 "superblock": true,
00:10:15.548 "num_base_bdevs": 3,
00:10:15.548 "num_base_bdevs_discovered": 2,
00:10:15.548 "num_base_bdevs_operational": 3,
00:10:15.548 "base_bdevs_list": [
00:10:15.548 {
00:10:15.548 "name": "BaseBdev1",
00:10:15.548 "uuid": "e3d20b6e-c95a-4705-8c6a-5f1147ce2973",
00:10:15.548 "is_configured": true,
00:10:15.548 "data_offset": 2048,
00:10:15.548 "data_size": 63488
00:10:15.548 },
00:10:15.548 {
00:10:15.548 "name": "BaseBdev2",
00:10:15.548 "uuid": "b2456f74-0ae4-4511-81a2-5ac704575bda",
00:10:15.548 "is_configured": true,
00:10:15.548 "data_offset": 2048,
00:10:15.548 "data_size": 63488
00:10:15.548 },
00:10:15.548 {
00:10:15.548 "name": "BaseBdev3",
00:10:15.548 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:15.548 "is_configured": false,
00:10:15.548 "data_offset": 0,
00:10:15.548 "data_size": 0
00:10:15.548 }
00:10:15.548 ]
00:10:15.548 }'
00:10:15.548 20:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:15.548 20:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:16.116 20:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:10:16.116 20:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:16.116 20:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:16.116 [2024-11-26 20:23:09.462735] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:16.116 [2024-11-26 20:23:09.463076] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register
0x617000006980 00:10:16.116 [2024-11-26 20:23:09.463140] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:16.116 BaseBdev3 00:10:16.116 [2024-11-26 20:23:09.463523] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:10:16.116 [2024-11-26 20:23:09.463757] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:16.116 [2024-11-26 20:23:09.463809] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:10:16.116 20:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.116 [2024-11-26 20:23:09.464015] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:16.116 20:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:16.116 20:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:16.116 20:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:16.116 20:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:16.116 20:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:16.116 20:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:16.116 20:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:16.116 20:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.116 20:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.116 20:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.116 20:23:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:16.116 20:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.116 20:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.116 [ 00:10:16.116 { 00:10:16.116 "name": "BaseBdev3", 00:10:16.116 "aliases": [ 00:10:16.116 "27e42ee2-b932-497d-831e-efb049ba8b31" 00:10:16.116 ], 00:10:16.116 "product_name": "Malloc disk", 00:10:16.116 "block_size": 512, 00:10:16.116 "num_blocks": 65536, 00:10:16.116 "uuid": "27e42ee2-b932-497d-831e-efb049ba8b31", 00:10:16.116 "assigned_rate_limits": { 00:10:16.116 "rw_ios_per_sec": 0, 00:10:16.116 "rw_mbytes_per_sec": 0, 00:10:16.116 "r_mbytes_per_sec": 0, 00:10:16.116 "w_mbytes_per_sec": 0 00:10:16.116 }, 00:10:16.116 "claimed": true, 00:10:16.116 "claim_type": "exclusive_write", 00:10:16.116 "zoned": false, 00:10:16.116 "supported_io_types": { 00:10:16.116 "read": true, 00:10:16.116 "write": true, 00:10:16.116 "unmap": true, 00:10:16.116 "flush": true, 00:10:16.116 "reset": true, 00:10:16.116 "nvme_admin": false, 00:10:16.116 "nvme_io": false, 00:10:16.116 "nvme_io_md": false, 00:10:16.116 "write_zeroes": true, 00:10:16.116 "zcopy": true, 00:10:16.116 "get_zone_info": false, 00:10:16.116 "zone_management": false, 00:10:16.116 "zone_append": false, 00:10:16.116 "compare": false, 00:10:16.116 "compare_and_write": false, 00:10:16.116 "abort": true, 00:10:16.116 "seek_hole": false, 00:10:16.116 "seek_data": false, 00:10:16.116 "copy": true, 00:10:16.116 "nvme_iov_md": false 00:10:16.116 }, 00:10:16.116 "memory_domains": [ 00:10:16.116 { 00:10:16.116 "dma_device_id": "system", 00:10:16.116 "dma_device_type": 1 00:10:16.116 }, 00:10:16.116 { 00:10:16.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.116 "dma_device_type": 2 00:10:16.116 } 00:10:16.116 ], 00:10:16.116 "driver_specific": {} 00:10:16.116 } 00:10:16.116 ] 
00:10:16.116 20:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.116 20:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:16.116 20:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:16.116 20:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:16.116 20:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:16.116 20:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.116 20:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:16.116 20:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:16.116 20:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:16.116 20:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:16.116 20:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.116 20:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.116 20:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.116 20:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.116 20:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.116 20:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.117 20:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.117 
20:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.117 20:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.117 20:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.117 "name": "Existed_Raid", 00:10:16.117 "uuid": "11e7c85e-7914-4193-a0f6-03db387e5a60", 00:10:16.117 "strip_size_kb": 0, 00:10:16.117 "state": "online", 00:10:16.117 "raid_level": "raid1", 00:10:16.117 "superblock": true, 00:10:16.117 "num_base_bdevs": 3, 00:10:16.117 "num_base_bdevs_discovered": 3, 00:10:16.117 "num_base_bdevs_operational": 3, 00:10:16.117 "base_bdevs_list": [ 00:10:16.117 { 00:10:16.117 "name": "BaseBdev1", 00:10:16.117 "uuid": "e3d20b6e-c95a-4705-8c6a-5f1147ce2973", 00:10:16.117 "is_configured": true, 00:10:16.117 "data_offset": 2048, 00:10:16.117 "data_size": 63488 00:10:16.117 }, 00:10:16.117 { 00:10:16.117 "name": "BaseBdev2", 00:10:16.117 "uuid": "b2456f74-0ae4-4511-81a2-5ac704575bda", 00:10:16.117 "is_configured": true, 00:10:16.117 "data_offset": 2048, 00:10:16.117 "data_size": 63488 00:10:16.117 }, 00:10:16.117 { 00:10:16.117 "name": "BaseBdev3", 00:10:16.117 "uuid": "27e42ee2-b932-497d-831e-efb049ba8b31", 00:10:16.117 "is_configured": true, 00:10:16.117 "data_offset": 2048, 00:10:16.117 "data_size": 63488 00:10:16.117 } 00:10:16.117 ] 00:10:16.117 }' 00:10:16.117 20:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.117 20:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.686 20:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:16.686 20:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:16.686 20:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:10:16.686 20:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:16.686 20:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:16.686 20:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:16.686 20:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:16.686 20:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.686 20:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.686 20:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:16.686 [2024-11-26 20:23:09.978339] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:16.686 20:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.686 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:16.686 "name": "Existed_Raid", 00:10:16.686 "aliases": [ 00:10:16.686 "11e7c85e-7914-4193-a0f6-03db387e5a60" 00:10:16.686 ], 00:10:16.686 "product_name": "Raid Volume", 00:10:16.686 "block_size": 512, 00:10:16.686 "num_blocks": 63488, 00:10:16.686 "uuid": "11e7c85e-7914-4193-a0f6-03db387e5a60", 00:10:16.686 "assigned_rate_limits": { 00:10:16.686 "rw_ios_per_sec": 0, 00:10:16.686 "rw_mbytes_per_sec": 0, 00:10:16.686 "r_mbytes_per_sec": 0, 00:10:16.686 "w_mbytes_per_sec": 0 00:10:16.686 }, 00:10:16.686 "claimed": false, 00:10:16.686 "zoned": false, 00:10:16.686 "supported_io_types": { 00:10:16.686 "read": true, 00:10:16.686 "write": true, 00:10:16.686 "unmap": false, 00:10:16.686 "flush": false, 00:10:16.686 "reset": true, 00:10:16.686 "nvme_admin": false, 00:10:16.686 "nvme_io": false, 00:10:16.686 "nvme_io_md": false, 00:10:16.686 "write_zeroes": true, 
00:10:16.686 "zcopy": false, 00:10:16.686 "get_zone_info": false, 00:10:16.686 "zone_management": false, 00:10:16.686 "zone_append": false, 00:10:16.686 "compare": false, 00:10:16.686 "compare_and_write": false, 00:10:16.686 "abort": false, 00:10:16.686 "seek_hole": false, 00:10:16.686 "seek_data": false, 00:10:16.686 "copy": false, 00:10:16.686 "nvme_iov_md": false 00:10:16.686 }, 00:10:16.686 "memory_domains": [ 00:10:16.686 { 00:10:16.686 "dma_device_id": "system", 00:10:16.686 "dma_device_type": 1 00:10:16.686 }, 00:10:16.686 { 00:10:16.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.686 "dma_device_type": 2 00:10:16.686 }, 00:10:16.686 { 00:10:16.686 "dma_device_id": "system", 00:10:16.686 "dma_device_type": 1 00:10:16.686 }, 00:10:16.686 { 00:10:16.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.686 "dma_device_type": 2 00:10:16.686 }, 00:10:16.686 { 00:10:16.686 "dma_device_id": "system", 00:10:16.686 "dma_device_type": 1 00:10:16.686 }, 00:10:16.686 { 00:10:16.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.686 "dma_device_type": 2 00:10:16.686 } 00:10:16.686 ], 00:10:16.686 "driver_specific": { 00:10:16.686 "raid": { 00:10:16.686 "uuid": "11e7c85e-7914-4193-a0f6-03db387e5a60", 00:10:16.686 "strip_size_kb": 0, 00:10:16.686 "state": "online", 00:10:16.686 "raid_level": "raid1", 00:10:16.686 "superblock": true, 00:10:16.686 "num_base_bdevs": 3, 00:10:16.686 "num_base_bdevs_discovered": 3, 00:10:16.686 "num_base_bdevs_operational": 3, 00:10:16.686 "base_bdevs_list": [ 00:10:16.686 { 00:10:16.686 "name": "BaseBdev1", 00:10:16.686 "uuid": "e3d20b6e-c95a-4705-8c6a-5f1147ce2973", 00:10:16.686 "is_configured": true, 00:10:16.686 "data_offset": 2048, 00:10:16.686 "data_size": 63488 00:10:16.686 }, 00:10:16.686 { 00:10:16.686 "name": "BaseBdev2", 00:10:16.686 "uuid": "b2456f74-0ae4-4511-81a2-5ac704575bda", 00:10:16.686 "is_configured": true, 00:10:16.686 "data_offset": 2048, 00:10:16.686 "data_size": 63488 00:10:16.686 }, 00:10:16.686 { 
00:10:16.686 "name": "BaseBdev3", 00:10:16.686 "uuid": "27e42ee2-b932-497d-831e-efb049ba8b31", 00:10:16.686 "is_configured": true, 00:10:16.686 "data_offset": 2048, 00:10:16.686 "data_size": 63488 00:10:16.686 } 00:10:16.686 ] 00:10:16.686 } 00:10:16.686 } 00:10:16.686 }' 00:10:16.686 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:16.686 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:16.686 BaseBdev2 00:10:16.686 BaseBdev3' 00:10:16.686 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.686 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:16.686 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.686 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:16.686 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.686 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.686 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.686 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.687 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.687 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.687 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.687 20:23:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:16.687 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.687 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.687 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.687 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.687 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.687 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.687 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.687 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:16.687 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.687 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.687 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.687 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.946 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.946 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.946 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:16.946 20:23:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.946 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.946 [2024-11-26 20:23:10.249654] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:16.946 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.946 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:16.946 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:16.946 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:16.946 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:16.946 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:16.946 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:16.946 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.946 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:16.946 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:16.946 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:16.946 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:16.946 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.946 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.946 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.946 
20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.946 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.946 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.946 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.946 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.946 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.946 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.946 "name": "Existed_Raid", 00:10:16.946 "uuid": "11e7c85e-7914-4193-a0f6-03db387e5a60", 00:10:16.946 "strip_size_kb": 0, 00:10:16.946 "state": "online", 00:10:16.946 "raid_level": "raid1", 00:10:16.946 "superblock": true, 00:10:16.946 "num_base_bdevs": 3, 00:10:16.946 "num_base_bdevs_discovered": 2, 00:10:16.946 "num_base_bdevs_operational": 2, 00:10:16.947 "base_bdevs_list": [ 00:10:16.947 { 00:10:16.947 "name": null, 00:10:16.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.947 "is_configured": false, 00:10:16.947 "data_offset": 0, 00:10:16.947 "data_size": 63488 00:10:16.947 }, 00:10:16.947 { 00:10:16.947 "name": "BaseBdev2", 00:10:16.947 "uuid": "b2456f74-0ae4-4511-81a2-5ac704575bda", 00:10:16.947 "is_configured": true, 00:10:16.947 "data_offset": 2048, 00:10:16.947 "data_size": 63488 00:10:16.947 }, 00:10:16.947 { 00:10:16.947 "name": "BaseBdev3", 00:10:16.947 "uuid": "27e42ee2-b932-497d-831e-efb049ba8b31", 00:10:16.947 "is_configured": true, 00:10:16.947 "data_offset": 2048, 00:10:16.947 "data_size": 63488 00:10:16.947 } 00:10:16.947 ] 00:10:16.947 }' 00:10:16.947 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.947 
20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.206 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:17.206 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:17.206 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:17.206 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.206 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.206 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.206 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.206 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:17.206 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:17.206 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:17.206 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.206 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.206 [2024-11-26 20:23:10.736010] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:17.466 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.466 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:17.466 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:17.466 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:17.466 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:17.466 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.466 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.466 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.466 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:17.466 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:17.466 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:17.466 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.466 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.466 [2024-11-26 20:23:10.801579] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:17.466 [2024-11-26 20:23:10.801787] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:17.466 [2024-11-26 20:23:10.823953] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:17.466 [2024-11-26 20:23:10.824012] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:17.466 [2024-11-26 20:23:10.824028] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:10:17.466 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.466 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:17.466 20:23:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:17.466 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.466 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:17.466 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.466 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.466 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.466 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:17.466 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:17.466 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:17.466 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:17.466 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:17.466 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:17.466 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.466 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.466 BaseBdev2 00:10:17.466 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.466 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:17.466 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:17.466 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:10:17.466 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:17.466 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:17.466 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:17.466 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:17.466 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.466 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.466 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.466 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:17.466 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.466 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.466 [ 00:10:17.466 { 00:10:17.466 "name": "BaseBdev2", 00:10:17.466 "aliases": [ 00:10:17.466 "04bb9f97-d562-47ec-8725-91626cf28af2" 00:10:17.466 ], 00:10:17.466 "product_name": "Malloc disk", 00:10:17.466 "block_size": 512, 00:10:17.466 "num_blocks": 65536, 00:10:17.466 "uuid": "04bb9f97-d562-47ec-8725-91626cf28af2", 00:10:17.466 "assigned_rate_limits": { 00:10:17.466 "rw_ios_per_sec": 0, 00:10:17.466 "rw_mbytes_per_sec": 0, 00:10:17.466 "r_mbytes_per_sec": 0, 00:10:17.466 "w_mbytes_per_sec": 0 00:10:17.466 }, 00:10:17.466 "claimed": false, 00:10:17.466 "zoned": false, 00:10:17.466 "supported_io_types": { 00:10:17.466 "read": true, 00:10:17.466 "write": true, 00:10:17.466 "unmap": true, 00:10:17.466 "flush": true, 00:10:17.466 "reset": true, 00:10:17.466 "nvme_admin": false, 00:10:17.466 "nvme_io": false, 00:10:17.466 
"nvme_io_md": false, 00:10:17.466 "write_zeroes": true, 00:10:17.466 "zcopy": true, 00:10:17.466 "get_zone_info": false, 00:10:17.466 "zone_management": false, 00:10:17.466 "zone_append": false, 00:10:17.466 "compare": false, 00:10:17.466 "compare_and_write": false, 00:10:17.466 "abort": true, 00:10:17.466 "seek_hole": false, 00:10:17.466 "seek_data": false, 00:10:17.466 "copy": true, 00:10:17.466 "nvme_iov_md": false 00:10:17.466 }, 00:10:17.466 "memory_domains": [ 00:10:17.466 { 00:10:17.466 "dma_device_id": "system", 00:10:17.466 "dma_device_type": 1 00:10:17.466 }, 00:10:17.466 { 00:10:17.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.466 "dma_device_type": 2 00:10:17.466 } 00:10:17.466 ], 00:10:17.466 "driver_specific": {} 00:10:17.466 } 00:10:17.466 ] 00:10:17.466 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.466 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:17.466 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:17.466 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:17.466 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:17.466 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.467 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.467 BaseBdev3 00:10:17.467 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.467 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:17.467 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:17.467 20:23:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:17.467 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:17.467 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:17.467 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:17.467 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:17.467 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.467 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.467 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.467 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:17.467 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.467 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.467 [ 00:10:17.467 { 00:10:17.467 "name": "BaseBdev3", 00:10:17.467 "aliases": [ 00:10:17.467 "55f76250-ceb6-44d7-ac89-f160fa5edfb0" 00:10:17.467 ], 00:10:17.467 "product_name": "Malloc disk", 00:10:17.467 "block_size": 512, 00:10:17.467 "num_blocks": 65536, 00:10:17.467 "uuid": "55f76250-ceb6-44d7-ac89-f160fa5edfb0", 00:10:17.467 "assigned_rate_limits": { 00:10:17.467 "rw_ios_per_sec": 0, 00:10:17.467 "rw_mbytes_per_sec": 0, 00:10:17.467 "r_mbytes_per_sec": 0, 00:10:17.467 "w_mbytes_per_sec": 0 00:10:17.467 }, 00:10:17.467 "claimed": false, 00:10:17.467 "zoned": false, 00:10:17.467 "supported_io_types": { 00:10:17.467 "read": true, 00:10:17.467 "write": true, 00:10:17.467 "unmap": true, 00:10:17.467 "flush": true, 00:10:17.467 "reset": true, 00:10:17.467 "nvme_admin": false, 
00:10:17.467 "nvme_io": false, 00:10:17.467 "nvme_io_md": false, 00:10:17.467 "write_zeroes": true, 00:10:17.467 "zcopy": true, 00:10:17.467 "get_zone_info": false, 00:10:17.467 "zone_management": false, 00:10:17.467 "zone_append": false, 00:10:17.467 "compare": false, 00:10:17.467 "compare_and_write": false, 00:10:17.467 "abort": true, 00:10:17.467 "seek_hole": false, 00:10:17.467 "seek_data": false, 00:10:17.467 "copy": true, 00:10:17.467 "nvme_iov_md": false 00:10:17.467 }, 00:10:17.467 "memory_domains": [ 00:10:17.467 { 00:10:17.467 "dma_device_id": "system", 00:10:17.467 "dma_device_type": 1 00:10:17.467 }, 00:10:17.467 { 00:10:17.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.467 "dma_device_type": 2 00:10:17.467 } 00:10:17.467 ], 00:10:17.467 "driver_specific": {} 00:10:17.467 } 00:10:17.467 ] 00:10:17.467 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.467 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:17.467 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:17.467 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:17.467 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:17.467 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.467 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.467 [2024-11-26 20:23:10.970576] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:17.467 [2024-11-26 20:23:10.970704] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:17.467 [2024-11-26 20:23:10.970738] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:17.467 [2024-11-26 20:23:10.972907] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:17.467 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.467 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:17.467 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.467 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.467 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:17.467 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:17.467 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:17.467 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.467 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.467 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.467 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.467 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.467 20:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.467 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.467 20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.467 
20:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.726 20:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.726 "name": "Existed_Raid", 00:10:17.726 "uuid": "70f946d2-e2d9-411e-981b-b600ad0b534d", 00:10:17.726 "strip_size_kb": 0, 00:10:17.726 "state": "configuring", 00:10:17.726 "raid_level": "raid1", 00:10:17.726 "superblock": true, 00:10:17.726 "num_base_bdevs": 3, 00:10:17.726 "num_base_bdevs_discovered": 2, 00:10:17.726 "num_base_bdevs_operational": 3, 00:10:17.726 "base_bdevs_list": [ 00:10:17.726 { 00:10:17.726 "name": "BaseBdev1", 00:10:17.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.726 "is_configured": false, 00:10:17.726 "data_offset": 0, 00:10:17.726 "data_size": 0 00:10:17.726 }, 00:10:17.726 { 00:10:17.726 "name": "BaseBdev2", 00:10:17.726 "uuid": "04bb9f97-d562-47ec-8725-91626cf28af2", 00:10:17.726 "is_configured": true, 00:10:17.726 "data_offset": 2048, 00:10:17.726 "data_size": 63488 00:10:17.726 }, 00:10:17.726 { 00:10:17.726 "name": "BaseBdev3", 00:10:17.726 "uuid": "55f76250-ceb6-44d7-ac89-f160fa5edfb0", 00:10:17.726 "is_configured": true, 00:10:17.726 "data_offset": 2048, 00:10:17.726 "data_size": 63488 00:10:17.726 } 00:10:17.726 ] 00:10:17.726 }' 00:10:17.726 20:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.726 20:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.985 20:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:17.985 20:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.985 20:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.985 [2024-11-26 20:23:11.461760] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:17.985 20:23:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.985 20:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:17.985 20:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.985 20:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.985 20:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:17.985 20:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:17.985 20:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:17.985 20:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.985 20:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.985 20:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.985 20:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.985 20:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.985 20:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.985 20:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.985 20:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.985 20:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.986 20:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.986 "name": 
"Existed_Raid", 00:10:17.986 "uuid": "70f946d2-e2d9-411e-981b-b600ad0b534d", 00:10:17.986 "strip_size_kb": 0, 00:10:17.986 "state": "configuring", 00:10:17.986 "raid_level": "raid1", 00:10:17.986 "superblock": true, 00:10:17.986 "num_base_bdevs": 3, 00:10:17.986 "num_base_bdevs_discovered": 1, 00:10:17.986 "num_base_bdevs_operational": 3, 00:10:17.986 "base_bdevs_list": [ 00:10:17.986 { 00:10:17.986 "name": "BaseBdev1", 00:10:17.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.986 "is_configured": false, 00:10:17.986 "data_offset": 0, 00:10:17.986 "data_size": 0 00:10:17.986 }, 00:10:17.986 { 00:10:17.986 "name": null, 00:10:17.986 "uuid": "04bb9f97-d562-47ec-8725-91626cf28af2", 00:10:17.986 "is_configured": false, 00:10:17.986 "data_offset": 0, 00:10:17.986 "data_size": 63488 00:10:17.986 }, 00:10:17.986 { 00:10:17.986 "name": "BaseBdev3", 00:10:17.986 "uuid": "55f76250-ceb6-44d7-ac89-f160fa5edfb0", 00:10:17.986 "is_configured": true, 00:10:17.986 "data_offset": 2048, 00:10:17.986 "data_size": 63488 00:10:17.986 } 00:10:17.986 ] 00:10:17.986 }' 00:10:17.986 20:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.986 20:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.552 20:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.552 20:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.552 20:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.552 20:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:18.552 20:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.552 20:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:18.552 
20:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:18.552 20:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.552 20:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.552 [2024-11-26 20:23:11.997910] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:18.552 BaseBdev1 00:10:18.552 20:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.552 20:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:18.552 20:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:18.552 20:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:18.552 20:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:18.552 20:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:18.552 20:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:18.552 20:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:18.552 20:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.552 20:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.552 20:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.552 20:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:18.552 20:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:18.552 20:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.552 [ 00:10:18.552 { 00:10:18.552 "name": "BaseBdev1", 00:10:18.552 "aliases": [ 00:10:18.552 "82f0d523-65e5-46f2-9f04-bc896481796c" 00:10:18.552 ], 00:10:18.552 "product_name": "Malloc disk", 00:10:18.552 "block_size": 512, 00:10:18.552 "num_blocks": 65536, 00:10:18.552 "uuid": "82f0d523-65e5-46f2-9f04-bc896481796c", 00:10:18.552 "assigned_rate_limits": { 00:10:18.553 "rw_ios_per_sec": 0, 00:10:18.553 "rw_mbytes_per_sec": 0, 00:10:18.553 "r_mbytes_per_sec": 0, 00:10:18.553 "w_mbytes_per_sec": 0 00:10:18.553 }, 00:10:18.553 "claimed": true, 00:10:18.553 "claim_type": "exclusive_write", 00:10:18.553 "zoned": false, 00:10:18.553 "supported_io_types": { 00:10:18.553 "read": true, 00:10:18.553 "write": true, 00:10:18.553 "unmap": true, 00:10:18.553 "flush": true, 00:10:18.553 "reset": true, 00:10:18.553 "nvme_admin": false, 00:10:18.553 "nvme_io": false, 00:10:18.553 "nvme_io_md": false, 00:10:18.553 "write_zeroes": true, 00:10:18.553 "zcopy": true, 00:10:18.553 "get_zone_info": false, 00:10:18.553 "zone_management": false, 00:10:18.553 "zone_append": false, 00:10:18.553 "compare": false, 00:10:18.553 "compare_and_write": false, 00:10:18.553 "abort": true, 00:10:18.553 "seek_hole": false, 00:10:18.553 "seek_data": false, 00:10:18.553 "copy": true, 00:10:18.553 "nvme_iov_md": false 00:10:18.553 }, 00:10:18.553 "memory_domains": [ 00:10:18.553 { 00:10:18.553 "dma_device_id": "system", 00:10:18.553 "dma_device_type": 1 00:10:18.553 }, 00:10:18.553 { 00:10:18.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.553 "dma_device_type": 2 00:10:18.553 } 00:10:18.553 ], 00:10:18.553 "driver_specific": {} 00:10:18.553 } 00:10:18.553 ] 00:10:18.553 20:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.553 20:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:18.553 
20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:18.553 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.553 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.553 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:18.553 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:18.553 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:18.553 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.553 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.553 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.553 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.553 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.553 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.553 20:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.553 20:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.553 20:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.553 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.553 "name": "Existed_Raid", 00:10:18.553 "uuid": "70f946d2-e2d9-411e-981b-b600ad0b534d", 00:10:18.553 "strip_size_kb": 0, 
00:10:18.553 "state": "configuring", 00:10:18.553 "raid_level": "raid1", 00:10:18.553 "superblock": true, 00:10:18.553 "num_base_bdevs": 3, 00:10:18.553 "num_base_bdevs_discovered": 2, 00:10:18.553 "num_base_bdevs_operational": 3, 00:10:18.553 "base_bdevs_list": [ 00:10:18.553 { 00:10:18.553 "name": "BaseBdev1", 00:10:18.553 "uuid": "82f0d523-65e5-46f2-9f04-bc896481796c", 00:10:18.553 "is_configured": true, 00:10:18.553 "data_offset": 2048, 00:10:18.553 "data_size": 63488 00:10:18.553 }, 00:10:18.553 { 00:10:18.553 "name": null, 00:10:18.553 "uuid": "04bb9f97-d562-47ec-8725-91626cf28af2", 00:10:18.553 "is_configured": false, 00:10:18.553 "data_offset": 0, 00:10:18.553 "data_size": 63488 00:10:18.553 }, 00:10:18.553 { 00:10:18.553 "name": "BaseBdev3", 00:10:18.553 "uuid": "55f76250-ceb6-44d7-ac89-f160fa5edfb0", 00:10:18.553 "is_configured": true, 00:10:18.553 "data_offset": 2048, 00:10:18.553 "data_size": 63488 00:10:18.553 } 00:10:18.553 ] 00:10:18.553 }' 00:10:18.553 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.553 20:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.119 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.119 20:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.119 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:19.119 20:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.119 20:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.119 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:19.119 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:10:19.119 20:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.119 20:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.119 [2024-11-26 20:23:12.541128] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:19.119 20:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.119 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:19.119 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.119 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.119 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:19.119 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:19.119 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:19.119 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.119 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.119 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.119 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.119 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.119 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.119 20:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:19.119 20:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.119 20:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.119 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.119 "name": "Existed_Raid", 00:10:19.119 "uuid": "70f946d2-e2d9-411e-981b-b600ad0b534d", 00:10:19.119 "strip_size_kb": 0, 00:10:19.119 "state": "configuring", 00:10:19.119 "raid_level": "raid1", 00:10:19.119 "superblock": true, 00:10:19.119 "num_base_bdevs": 3, 00:10:19.119 "num_base_bdevs_discovered": 1, 00:10:19.119 "num_base_bdevs_operational": 3, 00:10:19.119 "base_bdevs_list": [ 00:10:19.119 { 00:10:19.119 "name": "BaseBdev1", 00:10:19.119 "uuid": "82f0d523-65e5-46f2-9f04-bc896481796c", 00:10:19.119 "is_configured": true, 00:10:19.119 "data_offset": 2048, 00:10:19.119 "data_size": 63488 00:10:19.119 }, 00:10:19.119 { 00:10:19.119 "name": null, 00:10:19.119 "uuid": "04bb9f97-d562-47ec-8725-91626cf28af2", 00:10:19.119 "is_configured": false, 00:10:19.119 "data_offset": 0, 00:10:19.119 "data_size": 63488 00:10:19.119 }, 00:10:19.119 { 00:10:19.119 "name": null, 00:10:19.119 "uuid": "55f76250-ceb6-44d7-ac89-f160fa5edfb0", 00:10:19.119 "is_configured": false, 00:10:19.119 "data_offset": 0, 00:10:19.119 "data_size": 63488 00:10:19.119 } 00:10:19.119 ] 00:10:19.119 }' 00:10:19.119 20:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.119 20:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.688 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:19.688 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.688 20:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:19.688 20:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.688 20:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.688 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:19.688 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:19.688 20:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.688 20:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.688 [2024-11-26 20:23:13.060488] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:19.688 20:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.688 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:19.688 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.688 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.688 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:19.688 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:19.688 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:19.688 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.688 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.688 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:10:19.688 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.688 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.688 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.688 20:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.688 20:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.688 20:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.688 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.688 "name": "Existed_Raid", 00:10:19.688 "uuid": "70f946d2-e2d9-411e-981b-b600ad0b534d", 00:10:19.688 "strip_size_kb": 0, 00:10:19.688 "state": "configuring", 00:10:19.688 "raid_level": "raid1", 00:10:19.688 "superblock": true, 00:10:19.688 "num_base_bdevs": 3, 00:10:19.688 "num_base_bdevs_discovered": 2, 00:10:19.688 "num_base_bdevs_operational": 3, 00:10:19.688 "base_bdevs_list": [ 00:10:19.688 { 00:10:19.688 "name": "BaseBdev1", 00:10:19.688 "uuid": "82f0d523-65e5-46f2-9f04-bc896481796c", 00:10:19.688 "is_configured": true, 00:10:19.688 "data_offset": 2048, 00:10:19.688 "data_size": 63488 00:10:19.688 }, 00:10:19.688 { 00:10:19.688 "name": null, 00:10:19.688 "uuid": "04bb9f97-d562-47ec-8725-91626cf28af2", 00:10:19.688 "is_configured": false, 00:10:19.688 "data_offset": 0, 00:10:19.688 "data_size": 63488 00:10:19.688 }, 00:10:19.688 { 00:10:19.688 "name": "BaseBdev3", 00:10:19.688 "uuid": "55f76250-ceb6-44d7-ac89-f160fa5edfb0", 00:10:19.688 "is_configured": true, 00:10:19.688 "data_offset": 2048, 00:10:19.688 "data_size": 63488 00:10:19.688 } 00:10:19.688 ] 00:10:19.688 }' 00:10:19.688 20:23:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.688 20:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.014 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.014 20:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.014 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:20.014 20:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.014 20:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.274 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:20.274 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:20.274 20:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.274 20:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.274 [2024-11-26 20:23:13.567652] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:20.274 20:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.274 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:20.274 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:20.274 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:20.274 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:20.274 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:10:20.274 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:20.274 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.274 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.274 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.274 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.274 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.274 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.274 20:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.274 20:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.274 20:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.274 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.274 "name": "Existed_Raid", 00:10:20.274 "uuid": "70f946d2-e2d9-411e-981b-b600ad0b534d", 00:10:20.274 "strip_size_kb": 0, 00:10:20.274 "state": "configuring", 00:10:20.274 "raid_level": "raid1", 00:10:20.274 "superblock": true, 00:10:20.274 "num_base_bdevs": 3, 00:10:20.274 "num_base_bdevs_discovered": 1, 00:10:20.274 "num_base_bdevs_operational": 3, 00:10:20.274 "base_bdevs_list": [ 00:10:20.274 { 00:10:20.274 "name": null, 00:10:20.274 "uuid": "82f0d523-65e5-46f2-9f04-bc896481796c", 00:10:20.274 "is_configured": false, 00:10:20.274 "data_offset": 0, 00:10:20.274 "data_size": 63488 00:10:20.274 }, 00:10:20.274 { 00:10:20.274 "name": null, 00:10:20.274 "uuid": 
"04bb9f97-d562-47ec-8725-91626cf28af2",
00:10:20.274 "is_configured": false,
00:10:20.274 "data_offset": 0,
00:10:20.274 "data_size": 63488
00:10:20.274 },
00:10:20.274 {
00:10:20.274 "name": "BaseBdev3",
00:10:20.274 "uuid": "55f76250-ceb6-44d7-ac89-f160fa5edfb0",
00:10:20.274 "is_configured": true,
00:10:20.274 "data_offset": 2048,
00:10:20.274 "data_size": 63488
00:10:20.274 }
00:10:20.274 ]
00:10:20.274 }'
00:10:20.274 20:23:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:20.274 20:23:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:20.533 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:20.533 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:10:20.533 20:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:20.533 20:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:20.533 20:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:20.533 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:10:20.533 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:10:20.533 20:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:20.533 20:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:20.533 [2024-11-26 20:23:14.080630] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:20.793 20:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:20.793 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:20.793 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:20.793 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:20.793 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:20.793 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:20.793 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:20.793 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:20.793 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:20.793 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:20.793 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:20.793 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:20.793 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:20.793 20:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:20.793 20:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:20.793 20:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:20.793 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:20.793 "name": "Existed_Raid",
00:10:20.794 "uuid": "70f946d2-e2d9-411e-981b-b600ad0b534d",
00:10:20.794 "strip_size_kb": 0,
00:10:20.794 "state": "configuring",
00:10:20.794 "raid_level": "raid1",
00:10:20.794 "superblock": true,
00:10:20.794 "num_base_bdevs": 3,
00:10:20.794 "num_base_bdevs_discovered": 2,
00:10:20.794 "num_base_bdevs_operational": 3,
00:10:20.794 "base_bdevs_list": [
00:10:20.794 {
00:10:20.794 "name": null,
00:10:20.794 "uuid": "82f0d523-65e5-46f2-9f04-bc896481796c",
00:10:20.794 "is_configured": false,
00:10:20.794 "data_offset": 0,
00:10:20.794 "data_size": 63488
00:10:20.794 },
00:10:20.794 {
00:10:20.794 "name": "BaseBdev2",
00:10:20.794 "uuid": "04bb9f97-d562-47ec-8725-91626cf28af2",
00:10:20.794 "is_configured": true,
00:10:20.794 "data_offset": 2048,
00:10:20.794 "data_size": 63488
00:10:20.794 },
00:10:20.794 {
00:10:20.794 "name": "BaseBdev3",
00:10:20.794 "uuid": "55f76250-ceb6-44d7-ac89-f160fa5edfb0",
00:10:20.794 "is_configured": true,
00:10:20.794 "data_offset": 2048,
00:10:20.794 "data_size": 63488
00:10:20.794 }
00:10:20.794 ]
00:10:20.794 }'
00:10:20.794 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:20.794 20:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:21.052 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:21.052 20:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:21.052 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:10:21.052 20:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:21.052 20:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:21.052 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:10:21.052 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:10:21.052 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:21.052 20:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:21.053 20:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:21.053 20:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:21.053 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 82f0d523-65e5-46f2-9f04-bc896481796c
00:10:21.053 20:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:21.053 20:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:21.311 [2024-11-26 20:23:14.608807] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:10:21.311 [2024-11-26 20:23:14.609107] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00
00:10:21.311 [2024-11-26 20:23:14.609126] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:10:21.311 [2024-11-26 20:23:14.609405] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:10:21.311 NewBaseBdev
00:10:21.311 [2024-11-26 20:23:14.609551] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00
00:10:21.311 [2024-11-26 20:23:14.609567] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00
00:10:21.311 [2024-11-26 20:23:14.609685] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:21.311 20:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:21.311 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
00:10:21.311 20:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev
00:10:21.311 20:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:10:21.311 20:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:10:21.311 20:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:10:21.311 20:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:10:21.311 20:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:10:21.311 20:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:21.311 20:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:21.311 20:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:21.311 20:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000
00:10:21.311 20:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:21.311 20:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:21.311 [
00:10:21.311 {
00:10:21.311 "name": "NewBaseBdev",
00:10:21.311 "aliases": [
00:10:21.311 "82f0d523-65e5-46f2-9f04-bc896481796c"
00:10:21.311 ],
00:10:21.311 "product_name": "Malloc disk",
00:10:21.311 "block_size": 512,
00:10:21.311 "num_blocks": 65536,
00:10:21.311 "uuid": "82f0d523-65e5-46f2-9f04-bc896481796c",
00:10:21.311 "assigned_rate_limits": {
00:10:21.311 "rw_ios_per_sec": 0,
00:10:21.311 "rw_mbytes_per_sec": 0,
00:10:21.311 "r_mbytes_per_sec": 0,
00:10:21.311 "w_mbytes_per_sec": 0
00:10:21.311 },
00:10:21.311 "claimed": true,
00:10:21.311 "claim_type": "exclusive_write",
00:10:21.311 "zoned": false,
00:10:21.311 "supported_io_types": {
00:10:21.311 "read": true,
00:10:21.311 "write": true,
00:10:21.311 "unmap": true,
00:10:21.311 "flush": true,
00:10:21.311 "reset": true,
00:10:21.311 "nvme_admin": false,
00:10:21.311 "nvme_io": false,
00:10:21.311 "nvme_io_md": false,
00:10:21.311 "write_zeroes": true,
00:10:21.311 "zcopy": true,
00:10:21.311 "get_zone_info": false,
00:10:21.311 "zone_management": false,
00:10:21.311 "zone_append": false,
00:10:21.311 "compare": false,
00:10:21.311 "compare_and_write": false,
00:10:21.311 "abort": true,
00:10:21.311 "seek_hole": false,
00:10:21.311 "seek_data": false,
00:10:21.311 "copy": true,
00:10:21.311 "nvme_iov_md": false
00:10:21.311 },
00:10:21.311 "memory_domains": [
00:10:21.311 {
00:10:21.311 "dma_device_id": "system",
00:10:21.311 "dma_device_type": 1
00:10:21.311 },
00:10:21.311 {
00:10:21.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:21.311 "dma_device_type": 2
00:10:21.311 }
00:10:21.311 ],
00:10:21.311 "driver_specific": {}
00:10:21.311 }
00:10:21.311 ]
00:10:21.311 20:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:21.311 20:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:10:21.311 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3
00:10:21.311 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:21.311 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:21.311 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:21.311 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:21.311 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:21.311 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:21.311 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:21.311 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:21.311 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:21.311 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:21.311 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:21.311 20:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:21.311 20:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:21.311 20:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:21.311 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:21.311 "name": "Existed_Raid",
00:10:21.311 "uuid": "70f946d2-e2d9-411e-981b-b600ad0b534d",
00:10:21.311 "strip_size_kb": 0,
00:10:21.311 "state": "online",
00:10:21.311 "raid_level": "raid1",
00:10:21.311 "superblock": true,
00:10:21.311 "num_base_bdevs": 3,
00:10:21.311 "num_base_bdevs_discovered": 3,
00:10:21.311 "num_base_bdevs_operational": 3,
00:10:21.311 "base_bdevs_list": [
00:10:21.311 {
00:10:21.311 "name": "NewBaseBdev",
00:10:21.311 "uuid": "82f0d523-65e5-46f2-9f04-bc896481796c",
00:10:21.311 "is_configured": true,
00:10:21.311 "data_offset": 2048,
00:10:21.311 "data_size": 63488
00:10:21.311 },
00:10:21.311 {
00:10:21.311 "name": "BaseBdev2",
00:10:21.311 "uuid": "04bb9f97-d562-47ec-8725-91626cf28af2",
00:10:21.311 "is_configured": true,
00:10:21.311 "data_offset": 2048,
00:10:21.311 "data_size": 63488
00:10:21.311 },
00:10:21.311 {
00:10:21.311 "name": "BaseBdev3",
00:10:21.311 "uuid": "55f76250-ceb6-44d7-ac89-f160fa5edfb0",
00:10:21.311 "is_configured": true,
00:10:21.311 "data_offset": 2048,
00:10:21.311 "data_size": 63488
00:10:21.311 }
00:10:21.311 ]
00:10:21.311 }'
00:10:21.311 20:23:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:21.311 20:23:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:21.570 20:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid
00:10:21.570 20:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:10:21.570 20:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:10:21.570 20:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:10:21.570 20:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:10:21.570 20:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:10:21.570 20:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:10:21.570 20:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:10:21.570 20:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:21.570 20:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:21.570 [2024-11-26 20:23:15.108475] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:21.830 20:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:21.830 20:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:10:21.830 "name": "Existed_Raid",
00:10:21.830 "aliases": [
00:10:21.830 "70f946d2-e2d9-411e-981b-b600ad0b534d"
00:10:21.830 ],
00:10:21.830 "product_name": "Raid Volume",
00:10:21.830 "block_size": 512,
00:10:21.830 "num_blocks": 63488,
00:10:21.830 "uuid": "70f946d2-e2d9-411e-981b-b600ad0b534d",
00:10:21.830 "assigned_rate_limits": {
00:10:21.830 "rw_ios_per_sec": 0,
00:10:21.830 "rw_mbytes_per_sec": 0,
00:10:21.830 "r_mbytes_per_sec": 0,
00:10:21.830 "w_mbytes_per_sec": 0
00:10:21.830 },
00:10:21.830 "claimed": false,
00:10:21.830 "zoned": false,
00:10:21.830 "supported_io_types": {
00:10:21.830 "read": true,
00:10:21.830 "write": true,
00:10:21.830 "unmap": false,
00:10:21.830 "flush": false,
00:10:21.830 "reset": true,
00:10:21.830 "nvme_admin": false,
00:10:21.830 "nvme_io": false,
00:10:21.830 "nvme_io_md": false,
00:10:21.830 "write_zeroes": true,
00:10:21.830 "zcopy": false,
00:10:21.830 "get_zone_info": false,
00:10:21.830 "zone_management": false,
00:10:21.830 "zone_append": false,
00:10:21.830 "compare": false,
00:10:21.830 "compare_and_write": false,
00:10:21.830 "abort": false,
00:10:21.830 "seek_hole": false,
00:10:21.830 "seek_data": false,
00:10:21.830 "copy": false,
00:10:21.830 "nvme_iov_md": false
00:10:21.830 },
00:10:21.830 "memory_domains": [
00:10:21.830 {
00:10:21.830 "dma_device_id": "system",
00:10:21.830 "dma_device_type": 1
00:10:21.830 },
00:10:21.830 {
00:10:21.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:21.830 "dma_device_type": 2
00:10:21.830 },
00:10:21.830 {
00:10:21.830 "dma_device_id": "system",
00:10:21.830 "dma_device_type": 1
00:10:21.830 },
00:10:21.830 {
00:10:21.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:21.830 "dma_device_type": 2
00:10:21.830 },
00:10:21.830 {
00:10:21.830 "dma_device_id": "system",
00:10:21.830 "dma_device_type": 1
00:10:21.830 },
00:10:21.830 {
00:10:21.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:21.830 "dma_device_type": 2
00:10:21.830 }
00:10:21.830 ],
00:10:21.830 "driver_specific": {
00:10:21.830 "raid": {
00:10:21.830 "uuid": "70f946d2-e2d9-411e-981b-b600ad0b534d",
00:10:21.830 "strip_size_kb": 0,
00:10:21.830 "state": "online",
00:10:21.830 "raid_level": "raid1",
00:10:21.830 "superblock": true,
00:10:21.830 "num_base_bdevs": 3,
00:10:21.830 "num_base_bdevs_discovered": 3,
00:10:21.830 "num_base_bdevs_operational": 3,
00:10:21.830 "base_bdevs_list": [
00:10:21.830 {
00:10:21.830 "name": "NewBaseBdev",
00:10:21.830 "uuid": "82f0d523-65e5-46f2-9f04-bc896481796c",
00:10:21.830 "is_configured": true,
00:10:21.830 "data_offset": 2048,
00:10:21.830 "data_size": 63488
00:10:21.830 },
00:10:21.830 {
00:10:21.830 "name": "BaseBdev2",
00:10:21.830 "uuid": "04bb9f97-d562-47ec-8725-91626cf28af2",
00:10:21.830 "is_configured": true,
00:10:21.830 "data_offset": 2048,
00:10:21.830 "data_size": 63488
00:10:21.830 },
00:10:21.830 {
00:10:21.830 "name": "BaseBdev3",
00:10:21.830 "uuid": "55f76250-ceb6-44d7-ac89-f160fa5edfb0",
00:10:21.830 "is_configured": true,
00:10:21.830 "data_offset": 2048,
00:10:21.830 "data_size": 63488
00:10:21.830 }
00:10:21.830 ]
00:10:21.830 }
00:10:21.830 }
00:10:21.830 }'
00:10:21.830 20:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:10:21.830 20:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:10:21.830 BaseBdev2
00:10:21.830 BaseBdev3'
00:10:21.830 20:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:21.830 20:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:10:21.830 20:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:21.830 20:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:10:21.830 20:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:21.830 20:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:21.830 20:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:21.830 20:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:21.830 20:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:21.830 20:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:21.830 20:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:21.830 20:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:21.830 20:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:10:21.830 20:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:21.830 20:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:21.830 20:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:21.830 20:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:21.830 20:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:21.830 20:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:21.830 20:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:10:21.830 20:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:21.831 20:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:21.831 20:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:21.831 20:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:21.831 20:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:21.831 20:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:21.831 20:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:10:21.831 20:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:21.831 20:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:22.090 [2024-11-26 20:23:15.383710] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:22.090 [2024-11-26 20:23:15.383803] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:10:22.090 [2024-11-26 20:23:15.383949] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:22.090 [2024-11-26 20:23:15.384283] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:22.090 [2024-11-26 20:23:15.384333] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline
00:10:22.090 20:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:22.090 20:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 79495
00:10:22.090 20:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 79495 ']'
00:10:22.090 20:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 79495
00:10:22.090 20:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname
00:10:22.090 20:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:10:22.090 20:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79495
killing process with pid 79495
00:10:22.090 20:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:10:22.090 20:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:10:22.090 20:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79495'
00:10:22.090 20:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 79495
00:10:22.090 [2024-11-26 20:23:15.428857] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:10:22.090 20:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 79495
00:10:22.090 [2024-11-26 20:23:15.484508] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:10:22.350 20:23:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0
00:10:22.350
00:10:22.350 real 0m9.379s
00:10:22.350 user 0m15.902s
00:10:22.350 sys 0m1.818s
00:10:22.350 20:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:22.350 20:23:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:22.350 ************************************
00:10:22.350 END TEST raid_state_function_test_sb
00:10:22.350 ************************************
00:10:22.350 20:23:15 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3
00:10:22.350 20:23:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:10:22.350 20:23:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:22.610 20:23:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:10:22.610 ************************************
00:10:22.610 START TEST raid_superblock_test
00:10:22.610 ************************************
00:10:22.610 20:23:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 3
00:10:22.610 20:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1
00:10:22.610 20:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3
00:10:22.610 20:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:10:22.610 20:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:10:22.610 20:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:10:22.610 20:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:10:22.610 20:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:10:22.610 20:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:10:22.610 20:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:10:22.610 20:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size
00:10:22.610 20:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:10:22.610 20:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:10:22.610 20:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:10:22.610 20:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']'
00:10:22.610 20:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0
00:10:22.610 20:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:10:22.610 20:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=80104
00:10:22.610 20:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 80104
00:10:22.610 20:23:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 80104 ']'
00:10:22.610 20:23:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:22.610 20:23:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:10:22.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
20:23:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:22.610 20:23:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:10:22.610 20:23:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:22.610 [2024-11-26 20:23:15.997245] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:10:22.610 [2024-11-26 20:23:15.997394] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80104 ]
00:10:22.869 [2024-11-26 20:23:16.163997] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:22.869 [2024-11-26 20:23:16.249716] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:10:22.869 [2024-11-26 20:23:16.329160] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:22.869 [2024-11-26 20:23:16.329206] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:23.436 20:23:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:10:23.436 20:23:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0
00:10:23.436 20:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:10:23.436 20:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:10:23.436 20:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:10:23.436 20:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:10:23.436 20:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:10:23.436 20:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:10:23.436 20:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:10:23.436 20:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:10:23.436 20:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:10:23.436 20:23:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:23.436 20:23:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:23.436 malloc1
00:10:23.436 20:23:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:23.436 20:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:10:23.436 20:23:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:23.436 20:23:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:23.436 [2024-11-26 20:23:16.948744] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
[2024-11-26 20:23:16.948829] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-11-26 20:23:16.948853] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
[2024-11-26 20:23:16.948872] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-11-26 20:23:16.951483] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-11-26 20:23:16.951532] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
pt1
00:10:23.436 20:23:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:23.436 20:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:10:23.436 20:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:10:23.436 20:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:10:23.436 20:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:10:23.436 20:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:10:23.436 20:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:10:23.436 20:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:10:23.437 20:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:10:23.437 20:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:10:23.437 20:23:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:23.437 20:23:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:23.695 malloc2
00:10:23.695 20:23:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:23.695 20:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:10:23.695 20:23:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:23.695 20:23:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:23.695 [2024-11-26 20:23:16.995158] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
[2024-11-26 20:23:16.995243] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-11-26 20:23:16.995276] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
[2024-11-26 20:23:16.995289] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-11-26 20:23:16.998117] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-11-26 20:23:16.998171] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
pt2
00:10:23.695 20:23:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:23.695 20:23:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:10:23.695 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:10:23.695 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:10:23.695 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:10:23.695 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:10:23.695 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:10:23.695 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:10:23.695 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:10:23.695 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:10:23.695 20:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:23.695 20:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:23.695 malloc3
00:10:23.695 20:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:23.695 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:10:23.695 20:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:23.695 20:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:23.695 [2024-11-26 20:23:17.026659] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
[2024-11-26 20:23:17.026799] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-11-26 20:23:17.026845] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
[2024-11-26 20:23:17.026891] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-11-26 20:23:17.029517] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-11-26 20:23:17.029629] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
pt3
00:10:23.695 20:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:23.695 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:10:23.695 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:10:23.695 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s
00:10:23.695 20:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:23.695 20:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:23.695 [2024-11-26 20:23:17.038718] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:10:23.695 [2024-11-26 20:23:17.041070] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:10:23.695 [2024-11-26 20:23:17.041225] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:10:23.695 [2024-11-26 20:23:17.041442] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:10:23.695 [2024-11-26 20:23:17.041502] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:10:23.695 [2024-11-26 20:23:17.041886] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
[2024-11-26 20:23:17.042119] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
[2024-11-26 20:23:17.042176] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280
[2024-11-26 20:23:17.042392] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:23.695 20:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:23.695 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:10:23.695 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:23.695 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:23.695 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:23.695 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:23.695 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:23.695 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:23.695 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:23.695 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:23.695 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:23.695 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:23.695 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:23.695 20:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:23.695 20:23:17 bdev_raid.raid_superblock_test --
common/autotest_common.sh@10 -- # set +x 00:10:23.695 20:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.695 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.695 "name": "raid_bdev1", 00:10:23.695 "uuid": "3ab7862e-a445-4bea-b954-f09c23f5b626", 00:10:23.695 "strip_size_kb": 0, 00:10:23.695 "state": "online", 00:10:23.695 "raid_level": "raid1", 00:10:23.695 "superblock": true, 00:10:23.695 "num_base_bdevs": 3, 00:10:23.695 "num_base_bdevs_discovered": 3, 00:10:23.695 "num_base_bdevs_operational": 3, 00:10:23.695 "base_bdevs_list": [ 00:10:23.695 { 00:10:23.695 "name": "pt1", 00:10:23.695 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:23.695 "is_configured": true, 00:10:23.695 "data_offset": 2048, 00:10:23.695 "data_size": 63488 00:10:23.695 }, 00:10:23.695 { 00:10:23.695 "name": "pt2", 00:10:23.695 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:23.695 "is_configured": true, 00:10:23.695 "data_offset": 2048, 00:10:23.695 "data_size": 63488 00:10:23.695 }, 00:10:23.695 { 00:10:23.695 "name": "pt3", 00:10:23.695 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:23.696 "is_configured": true, 00:10:23.696 "data_offset": 2048, 00:10:23.696 "data_size": 63488 00:10:23.696 } 00:10:23.696 ] 00:10:23.696 }' 00:10:23.696 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.696 20:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.262 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:24.262 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:24.262 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:24.262 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:24.262 20:23:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:24.262 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:24.262 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:24.262 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:24.262 20:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.262 20:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.262 [2024-11-26 20:23:17.518208] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:24.262 20:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.262 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:24.262 "name": "raid_bdev1", 00:10:24.262 "aliases": [ 00:10:24.262 "3ab7862e-a445-4bea-b954-f09c23f5b626" 00:10:24.262 ], 00:10:24.262 "product_name": "Raid Volume", 00:10:24.262 "block_size": 512, 00:10:24.262 "num_blocks": 63488, 00:10:24.262 "uuid": "3ab7862e-a445-4bea-b954-f09c23f5b626", 00:10:24.262 "assigned_rate_limits": { 00:10:24.262 "rw_ios_per_sec": 0, 00:10:24.262 "rw_mbytes_per_sec": 0, 00:10:24.262 "r_mbytes_per_sec": 0, 00:10:24.262 "w_mbytes_per_sec": 0 00:10:24.262 }, 00:10:24.262 "claimed": false, 00:10:24.262 "zoned": false, 00:10:24.262 "supported_io_types": { 00:10:24.262 "read": true, 00:10:24.262 "write": true, 00:10:24.262 "unmap": false, 00:10:24.262 "flush": false, 00:10:24.262 "reset": true, 00:10:24.262 "nvme_admin": false, 00:10:24.262 "nvme_io": false, 00:10:24.262 "nvme_io_md": false, 00:10:24.262 "write_zeroes": true, 00:10:24.262 "zcopy": false, 00:10:24.262 "get_zone_info": false, 00:10:24.262 "zone_management": false, 00:10:24.262 "zone_append": false, 00:10:24.262 "compare": false, 00:10:24.262 
"compare_and_write": false, 00:10:24.262 "abort": false, 00:10:24.262 "seek_hole": false, 00:10:24.262 "seek_data": false, 00:10:24.262 "copy": false, 00:10:24.262 "nvme_iov_md": false 00:10:24.262 }, 00:10:24.262 "memory_domains": [ 00:10:24.262 { 00:10:24.262 "dma_device_id": "system", 00:10:24.262 "dma_device_type": 1 00:10:24.262 }, 00:10:24.262 { 00:10:24.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.262 "dma_device_type": 2 00:10:24.262 }, 00:10:24.262 { 00:10:24.262 "dma_device_id": "system", 00:10:24.262 "dma_device_type": 1 00:10:24.262 }, 00:10:24.262 { 00:10:24.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.262 "dma_device_type": 2 00:10:24.262 }, 00:10:24.262 { 00:10:24.262 "dma_device_id": "system", 00:10:24.262 "dma_device_type": 1 00:10:24.262 }, 00:10:24.262 { 00:10:24.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.262 "dma_device_type": 2 00:10:24.262 } 00:10:24.262 ], 00:10:24.262 "driver_specific": { 00:10:24.262 "raid": { 00:10:24.262 "uuid": "3ab7862e-a445-4bea-b954-f09c23f5b626", 00:10:24.262 "strip_size_kb": 0, 00:10:24.262 "state": "online", 00:10:24.262 "raid_level": "raid1", 00:10:24.262 "superblock": true, 00:10:24.262 "num_base_bdevs": 3, 00:10:24.262 "num_base_bdevs_discovered": 3, 00:10:24.262 "num_base_bdevs_operational": 3, 00:10:24.262 "base_bdevs_list": [ 00:10:24.262 { 00:10:24.262 "name": "pt1", 00:10:24.262 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:24.262 "is_configured": true, 00:10:24.262 "data_offset": 2048, 00:10:24.262 "data_size": 63488 00:10:24.262 }, 00:10:24.262 { 00:10:24.262 "name": "pt2", 00:10:24.262 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:24.262 "is_configured": true, 00:10:24.262 "data_offset": 2048, 00:10:24.262 "data_size": 63488 00:10:24.262 }, 00:10:24.262 { 00:10:24.262 "name": "pt3", 00:10:24.263 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:24.263 "is_configured": true, 00:10:24.263 "data_offset": 2048, 00:10:24.263 "data_size": 63488 00:10:24.263 } 
00:10:24.263 ] 00:10:24.263 } 00:10:24.263 } 00:10:24.263 }' 00:10:24.263 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:24.263 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:24.263 pt2 00:10:24.263 pt3' 00:10:24.263 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:24.263 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:24.263 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:24.263 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:24.263 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:24.263 20:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.263 20:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.263 20:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.263 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:24.263 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:24.263 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:24.263 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:24.263 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:24.263 20:23:17 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.263 20:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.263 20:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.263 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:24.263 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:24.263 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:24.263 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:24.263 20:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.263 20:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.263 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:24.263 20:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.526 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:24.526 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:24.526 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:24.526 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:24.526 20:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.526 20:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.526 [2024-11-26 20:23:17.825709] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:24.526 20:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:10:24.526 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3ab7862e-a445-4bea-b954-f09c23f5b626 00:10:24.526 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 3ab7862e-a445-4bea-b954-f09c23f5b626 ']' 00:10:24.526 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:24.526 20:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.526 20:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.526 [2024-11-26 20:23:17.869287] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:24.526 [2024-11-26 20:23:17.869395] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:24.526 [2024-11-26 20:23:17.869510] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:24.526 [2024-11-26 20:23:17.869599] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:24.526 [2024-11-26 20:23:17.869626] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:10:24.526 20:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.526 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:24.526 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.526 20:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.526 20:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.526 20:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.526 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:10:24.526 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:24.526 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:24.526 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:24.527 20:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.527 20:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.527 20:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.527 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:24.527 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:24.527 20:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.527 20:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.527 20:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.527 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:24.527 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:24.527 20:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.527 20:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.527 20:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.527 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:24.527 20:23:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:24.527 20:23:17 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.527 20:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.527 20:23:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.527 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:24.527 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:24.527 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:24.527 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:24.527 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:24.527 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:24.527 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:24.527 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:24.527 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:24.527 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.527 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.527 [2024-11-26 20:23:18.017067] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:24.527 [2024-11-26 20:23:18.019394] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:24.527 [2024-11-26 20:23:18.019459] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:24.527 [2024-11-26 20:23:18.019524] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:24.527 [2024-11-26 20:23:18.019601] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:24.527 [2024-11-26 20:23:18.019641] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:24.527 [2024-11-26 20:23:18.019659] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:24.527 [2024-11-26 20:23:18.019671] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:10:24.527 request: 00:10:24.527 { 00:10:24.527 "name": "raid_bdev1", 00:10:24.527 "raid_level": "raid1", 00:10:24.527 "base_bdevs": [ 00:10:24.527 "malloc1", 00:10:24.527 "malloc2", 00:10:24.527 "malloc3" 00:10:24.527 ], 00:10:24.527 "superblock": false, 00:10:24.527 "method": "bdev_raid_create", 00:10:24.527 "req_id": 1 00:10:24.527 } 00:10:24.527 Got JSON-RPC error response 00:10:24.527 response: 00:10:24.527 { 00:10:24.527 "code": -17, 00:10:24.527 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:24.527 } 00:10:24.527 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:24.527 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:24.527 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:24.527 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:24.527 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:24.527 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq 
-r '.[]' 00:10:24.527 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.527 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.527 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.527 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.527 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:24.527 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:24.527 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:24.527 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.527 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.794 [2024-11-26 20:23:18.076913] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:24.794 [2024-11-26 20:23:18.077079] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.794 [2024-11-26 20:23:18.077139] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:24.794 [2024-11-26 20:23:18.077175] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.794 [2024-11-26 20:23:18.079755] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.794 [2024-11-26 20:23:18.079850] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:24.794 [2024-11-26 20:23:18.080017] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:24.794 [2024-11-26 20:23:18.080104] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:24.794 pt1 00:10:24.794 
20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.794 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:24.794 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:24.794 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:24.794 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:24.794 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:24.794 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:24.794 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.794 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.794 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.794 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.794 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.794 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.794 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:24.794 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.794 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.794 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.794 "name": "raid_bdev1", 00:10:24.794 "uuid": "3ab7862e-a445-4bea-b954-f09c23f5b626", 00:10:24.794 "strip_size_kb": 0, 00:10:24.794 
"state": "configuring", 00:10:24.794 "raid_level": "raid1", 00:10:24.794 "superblock": true, 00:10:24.794 "num_base_bdevs": 3, 00:10:24.794 "num_base_bdevs_discovered": 1, 00:10:24.794 "num_base_bdevs_operational": 3, 00:10:24.794 "base_bdevs_list": [ 00:10:24.794 { 00:10:24.794 "name": "pt1", 00:10:24.794 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:24.794 "is_configured": true, 00:10:24.794 "data_offset": 2048, 00:10:24.794 "data_size": 63488 00:10:24.794 }, 00:10:24.794 { 00:10:24.794 "name": null, 00:10:24.794 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:24.794 "is_configured": false, 00:10:24.794 "data_offset": 2048, 00:10:24.794 "data_size": 63488 00:10:24.794 }, 00:10:24.794 { 00:10:24.794 "name": null, 00:10:24.794 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:24.794 "is_configured": false, 00:10:24.794 "data_offset": 2048, 00:10:24.794 "data_size": 63488 00:10:24.794 } 00:10:24.794 ] 00:10:24.794 }' 00:10:24.794 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.794 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.052 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:25.052 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:25.052 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.052 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.311 [2024-11-26 20:23:18.604223] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:25.311 [2024-11-26 20:23:18.604331] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.311 [2024-11-26 20:23:18.604355] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:25.311 
[2024-11-26 20:23:18.604370] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.311 [2024-11-26 20:23:18.604876] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.311 [2024-11-26 20:23:18.604903] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:25.311 [2024-11-26 20:23:18.604989] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:25.311 [2024-11-26 20:23:18.605017] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:25.311 pt2 00:10:25.311 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.311 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:25.311 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.311 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.311 [2024-11-26 20:23:18.616247] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:25.311 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.311 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:25.311 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:25.311 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.311 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:25.311 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:25.311 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:25.311 20:23:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.311 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.311 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.311 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.311 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:25.311 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.311 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.311 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.311 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.311 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.311 "name": "raid_bdev1", 00:10:25.311 "uuid": "3ab7862e-a445-4bea-b954-f09c23f5b626", 00:10:25.311 "strip_size_kb": 0, 00:10:25.311 "state": "configuring", 00:10:25.311 "raid_level": "raid1", 00:10:25.311 "superblock": true, 00:10:25.311 "num_base_bdevs": 3, 00:10:25.311 "num_base_bdevs_discovered": 1, 00:10:25.311 "num_base_bdevs_operational": 3, 00:10:25.311 "base_bdevs_list": [ 00:10:25.311 { 00:10:25.311 "name": "pt1", 00:10:25.311 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:25.311 "is_configured": true, 00:10:25.311 "data_offset": 2048, 00:10:25.311 "data_size": 63488 00:10:25.311 }, 00:10:25.311 { 00:10:25.311 "name": null, 00:10:25.311 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:25.311 "is_configured": false, 00:10:25.311 "data_offset": 0, 00:10:25.311 "data_size": 63488 00:10:25.311 }, 00:10:25.311 { 00:10:25.311 "name": null, 00:10:25.311 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:25.311 "is_configured": false, 00:10:25.311 
"data_offset": 2048, 00:10:25.311 "data_size": 63488 00:10:25.311 } 00:10:25.311 ] 00:10:25.311 }' 00:10:25.311 20:23:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.311 20:23:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.570 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:25.570 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:25.570 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:25.570 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.570 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.570 [2024-11-26 20:23:19.063577] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:25.570 [2024-11-26 20:23:19.063745] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.570 [2024-11-26 20:23:19.063795] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:25.570 [2024-11-26 20:23:19.063833] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.570 [2024-11-26 20:23:19.064329] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.570 [2024-11-26 20:23:19.064411] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:25.570 [2024-11-26 20:23:19.064556] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:25.570 [2024-11-26 20:23:19.064639] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:25.570 pt2 00:10:25.570 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.570 20:23:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:25.570 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:25.570 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:25.570 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.570 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.570 [2024-11-26 20:23:19.071522] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:25.570 [2024-11-26 20:23:19.071640] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.570 [2024-11-26 20:23:19.071694] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:25.570 [2024-11-26 20:23:19.071728] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.570 [2024-11-26 20:23:19.072203] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.570 [2024-11-26 20:23:19.072271] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:25.570 [2024-11-26 20:23:19.072393] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:25.570 [2024-11-26 20:23:19.072461] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:25.570 [2024-11-26 20:23:19.072595] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:25.570 [2024-11-26 20:23:19.072605] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:25.570 [2024-11-26 20:23:19.072903] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:25.570 [2024-11-26 20:23:19.073049] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000006980 00:10:25.570 [2024-11-26 20:23:19.073064] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:10:25.570 [2024-11-26 20:23:19.073186] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:25.570 pt3 00:10:25.570 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.570 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:25.570 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:25.570 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:25.570 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:25.570 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:25.570 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:25.570 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:25.570 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:25.570 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.570 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.570 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.570 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.570 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.570 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.570 20:23:19 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:10:25.570 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:25.570 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.837 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.837 "name": "raid_bdev1", 00:10:25.837 "uuid": "3ab7862e-a445-4bea-b954-f09c23f5b626", 00:10:25.837 "strip_size_kb": 0, 00:10:25.837 "state": "online", 00:10:25.837 "raid_level": "raid1", 00:10:25.837 "superblock": true, 00:10:25.837 "num_base_bdevs": 3, 00:10:25.837 "num_base_bdevs_discovered": 3, 00:10:25.837 "num_base_bdevs_operational": 3, 00:10:25.837 "base_bdevs_list": [ 00:10:25.837 { 00:10:25.837 "name": "pt1", 00:10:25.837 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:25.837 "is_configured": true, 00:10:25.837 "data_offset": 2048, 00:10:25.837 "data_size": 63488 00:10:25.837 }, 00:10:25.837 { 00:10:25.837 "name": "pt2", 00:10:25.837 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:25.837 "is_configured": true, 00:10:25.837 "data_offset": 2048, 00:10:25.837 "data_size": 63488 00:10:25.838 }, 00:10:25.838 { 00:10:25.838 "name": "pt3", 00:10:25.838 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:25.838 "is_configured": true, 00:10:25.838 "data_offset": 2048, 00:10:25.838 "data_size": 63488 00:10:25.838 } 00:10:25.838 ] 00:10:25.838 }' 00:10:25.838 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.838 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.098 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:26.098 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:26.098 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:10:26.098 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:26.098 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:26.098 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:26.099 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:26.099 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.099 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:26.099 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.099 [2024-11-26 20:23:19.507158] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:26.099 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.099 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:26.099 "name": "raid_bdev1", 00:10:26.099 "aliases": [ 00:10:26.099 "3ab7862e-a445-4bea-b954-f09c23f5b626" 00:10:26.099 ], 00:10:26.099 "product_name": "Raid Volume", 00:10:26.099 "block_size": 512, 00:10:26.099 "num_blocks": 63488, 00:10:26.099 "uuid": "3ab7862e-a445-4bea-b954-f09c23f5b626", 00:10:26.099 "assigned_rate_limits": { 00:10:26.099 "rw_ios_per_sec": 0, 00:10:26.099 "rw_mbytes_per_sec": 0, 00:10:26.099 "r_mbytes_per_sec": 0, 00:10:26.099 "w_mbytes_per_sec": 0 00:10:26.099 }, 00:10:26.099 "claimed": false, 00:10:26.099 "zoned": false, 00:10:26.099 "supported_io_types": { 00:10:26.099 "read": true, 00:10:26.099 "write": true, 00:10:26.099 "unmap": false, 00:10:26.099 "flush": false, 00:10:26.099 "reset": true, 00:10:26.099 "nvme_admin": false, 00:10:26.099 "nvme_io": false, 00:10:26.099 "nvme_io_md": false, 00:10:26.099 "write_zeroes": true, 00:10:26.099 "zcopy": false, 00:10:26.099 "get_zone_info": false, 
00:10:26.099 "zone_management": false, 00:10:26.099 "zone_append": false, 00:10:26.099 "compare": false, 00:10:26.099 "compare_and_write": false, 00:10:26.099 "abort": false, 00:10:26.099 "seek_hole": false, 00:10:26.099 "seek_data": false, 00:10:26.099 "copy": false, 00:10:26.099 "nvme_iov_md": false 00:10:26.099 }, 00:10:26.099 "memory_domains": [ 00:10:26.099 { 00:10:26.099 "dma_device_id": "system", 00:10:26.099 "dma_device_type": 1 00:10:26.099 }, 00:10:26.099 { 00:10:26.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.099 "dma_device_type": 2 00:10:26.099 }, 00:10:26.099 { 00:10:26.099 "dma_device_id": "system", 00:10:26.099 "dma_device_type": 1 00:10:26.099 }, 00:10:26.099 { 00:10:26.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.099 "dma_device_type": 2 00:10:26.099 }, 00:10:26.099 { 00:10:26.099 "dma_device_id": "system", 00:10:26.099 "dma_device_type": 1 00:10:26.099 }, 00:10:26.099 { 00:10:26.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.099 "dma_device_type": 2 00:10:26.099 } 00:10:26.099 ], 00:10:26.099 "driver_specific": { 00:10:26.099 "raid": { 00:10:26.099 "uuid": "3ab7862e-a445-4bea-b954-f09c23f5b626", 00:10:26.099 "strip_size_kb": 0, 00:10:26.099 "state": "online", 00:10:26.099 "raid_level": "raid1", 00:10:26.099 "superblock": true, 00:10:26.099 "num_base_bdevs": 3, 00:10:26.099 "num_base_bdevs_discovered": 3, 00:10:26.099 "num_base_bdevs_operational": 3, 00:10:26.099 "base_bdevs_list": [ 00:10:26.099 { 00:10:26.099 "name": "pt1", 00:10:26.099 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:26.099 "is_configured": true, 00:10:26.099 "data_offset": 2048, 00:10:26.099 "data_size": 63488 00:10:26.099 }, 00:10:26.099 { 00:10:26.099 "name": "pt2", 00:10:26.099 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:26.099 "is_configured": true, 00:10:26.099 "data_offset": 2048, 00:10:26.099 "data_size": 63488 00:10:26.099 }, 00:10:26.099 { 00:10:26.099 "name": "pt3", 00:10:26.099 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:10:26.099 "is_configured": true, 00:10:26.099 "data_offset": 2048, 00:10:26.099 "data_size": 63488 00:10:26.099 } 00:10:26.099 ] 00:10:26.099 } 00:10:26.099 } 00:10:26.099 }' 00:10:26.099 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:26.099 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:26.099 pt2 00:10:26.099 pt3' 00:10:26.099 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.099 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:26.099 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:26.099 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:26.099 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.099 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.099 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.359 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.359 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:26.359 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:26.359 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:26.359 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.359 20:23:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:26.359 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.359 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.359 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.359 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:26.359 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:26.359 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:26.359 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:26.359 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.359 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.359 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.359 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.359 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:26.359 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:26.359 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:26.359 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:26.359 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.359 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.359 [2024-11-26 20:23:19.778728] 
bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:26.359 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.359 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 3ab7862e-a445-4bea-b954-f09c23f5b626 '!=' 3ab7862e-a445-4bea-b954-f09c23f5b626 ']' 00:10:26.359 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:26.359 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:26.359 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:26.359 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:26.359 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.359 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.359 [2024-11-26 20:23:19.822346] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:26.359 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.359 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:26.359 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:26.359 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:26.359 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:26.359 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:26.359 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:26.359 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.359 20:23:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.359 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.359 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.359 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.359 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.359 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.359 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:26.359 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.359 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.359 "name": "raid_bdev1", 00:10:26.359 "uuid": "3ab7862e-a445-4bea-b954-f09c23f5b626", 00:10:26.359 "strip_size_kb": 0, 00:10:26.359 "state": "online", 00:10:26.359 "raid_level": "raid1", 00:10:26.359 "superblock": true, 00:10:26.359 "num_base_bdevs": 3, 00:10:26.359 "num_base_bdevs_discovered": 2, 00:10:26.359 "num_base_bdevs_operational": 2, 00:10:26.359 "base_bdevs_list": [ 00:10:26.359 { 00:10:26.359 "name": null, 00:10:26.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.359 "is_configured": false, 00:10:26.359 "data_offset": 0, 00:10:26.359 "data_size": 63488 00:10:26.359 }, 00:10:26.359 { 00:10:26.359 "name": "pt2", 00:10:26.359 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:26.359 "is_configured": true, 00:10:26.359 "data_offset": 2048, 00:10:26.359 "data_size": 63488 00:10:26.359 }, 00:10:26.359 { 00:10:26.359 "name": "pt3", 00:10:26.359 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:26.359 "is_configured": true, 00:10:26.359 "data_offset": 2048, 00:10:26.359 "data_size": 63488 00:10:26.359 } 
00:10:26.359 ] 00:10:26.359 }' 00:10:26.359 20:23:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.359 20:23:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.926 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:26.926 20:23:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.926 20:23:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.926 [2024-11-26 20:23:20.289513] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:26.926 [2024-11-26 20:23:20.289556] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:26.926 [2024-11-26 20:23:20.289659] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:26.926 [2024-11-26 20:23:20.289731] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:26.926 [2024-11-26 20:23:20.289742] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:10:26.926 20:23:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.926 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.926 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:26.926 20:23:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.926 20:23:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.926 20:23:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.926 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:26.926 20:23:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:26.926 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:26.926 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:26.926 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:10:26.926 20:23:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.926 20:23:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.926 20:23:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.926 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:26.926 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:26.926 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:10:26.926 20:23:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.926 20:23:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.926 20:23:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.926 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:26.926 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:26.926 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:26.926 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:26.926 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:26.926 20:23:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.926 20:23:20 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.926 [2024-11-26 20:23:20.377374] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:26.926 [2024-11-26 20:23:20.377522] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.926 [2024-11-26 20:23:20.377571] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:10:26.926 [2024-11-26 20:23:20.377626] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.926 [2024-11-26 20:23:20.380242] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.926 [2024-11-26 20:23:20.380333] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:26.926 [2024-11-26 20:23:20.380477] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:26.926 [2024-11-26 20:23:20.380627] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:26.926 pt2 00:10:26.927 20:23:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.927 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:26.927 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:26.927 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.927 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:26.927 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:26.927 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:26.927 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.927 20:23:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.927 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.927 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.927 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:26.927 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.927 20:23:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.927 20:23:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.927 20:23:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.927 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.927 "name": "raid_bdev1", 00:10:26.927 "uuid": "3ab7862e-a445-4bea-b954-f09c23f5b626", 00:10:26.927 "strip_size_kb": 0, 00:10:26.927 "state": "configuring", 00:10:26.927 "raid_level": "raid1", 00:10:26.927 "superblock": true, 00:10:26.927 "num_base_bdevs": 3, 00:10:26.927 "num_base_bdevs_discovered": 1, 00:10:26.927 "num_base_bdevs_operational": 2, 00:10:26.927 "base_bdevs_list": [ 00:10:26.927 { 00:10:26.927 "name": null, 00:10:26.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.927 "is_configured": false, 00:10:26.927 "data_offset": 2048, 00:10:26.927 "data_size": 63488 00:10:26.927 }, 00:10:26.927 { 00:10:26.927 "name": "pt2", 00:10:26.927 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:26.927 "is_configured": true, 00:10:26.927 "data_offset": 2048, 00:10:26.927 "data_size": 63488 00:10:26.927 }, 00:10:26.927 { 00:10:26.927 "name": null, 00:10:26.927 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:26.927 "is_configured": false, 00:10:26.927 "data_offset": 2048, 00:10:26.927 "data_size": 63488 00:10:26.927 } 
00:10:26.927 ] 00:10:26.927 }' 00:10:26.927 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.927 20:23:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.495 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:27.495 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:27.495 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:10:27.495 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:27.495 20:23:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.495 20:23:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.495 [2024-11-26 20:23:20.824665] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:27.495 [2024-11-26 20:23:20.824748] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:27.495 [2024-11-26 20:23:20.824774] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:27.495 [2024-11-26 20:23:20.824784] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:27.495 [2024-11-26 20:23:20.825242] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:27.495 [2024-11-26 20:23:20.825262] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:27.495 [2024-11-26 20:23:20.825350] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:27.495 [2024-11-26 20:23:20.825376] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:27.495 [2024-11-26 20:23:20.825478] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 
00:10:27.495 [2024-11-26 20:23:20.825488] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:27.495 [2024-11-26 20:23:20.825792] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:27.495 [2024-11-26 20:23:20.825934] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:27.495 [2024-11-26 20:23:20.825947] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:10:27.495 [2024-11-26 20:23:20.826066] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:27.495 pt3 00:10:27.495 20:23:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.495 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:27.495 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:27.495 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:27.495 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:27.495 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:27.495 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:27.495 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.495 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.495 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.495 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.495 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:10:27.495 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.495 20:23:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.495 20:23:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.495 20:23:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.495 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.495 "name": "raid_bdev1", 00:10:27.495 "uuid": "3ab7862e-a445-4bea-b954-f09c23f5b626", 00:10:27.495 "strip_size_kb": 0, 00:10:27.495 "state": "online", 00:10:27.495 "raid_level": "raid1", 00:10:27.495 "superblock": true, 00:10:27.495 "num_base_bdevs": 3, 00:10:27.495 "num_base_bdevs_discovered": 2, 00:10:27.495 "num_base_bdevs_operational": 2, 00:10:27.495 "base_bdevs_list": [ 00:10:27.495 { 00:10:27.495 "name": null, 00:10:27.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.495 "is_configured": false, 00:10:27.495 "data_offset": 2048, 00:10:27.495 "data_size": 63488 00:10:27.495 }, 00:10:27.495 { 00:10:27.495 "name": "pt2", 00:10:27.495 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:27.495 "is_configured": true, 00:10:27.495 "data_offset": 2048, 00:10:27.495 "data_size": 63488 00:10:27.495 }, 00:10:27.495 { 00:10:27.495 "name": "pt3", 00:10:27.495 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:27.495 "is_configured": true, 00:10:27.495 "data_offset": 2048, 00:10:27.495 "data_size": 63488 00:10:27.495 } 00:10:27.495 ] 00:10:27.495 }' 00:10:27.495 20:23:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.495 20:23:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.060 20:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:28.060 20:23:21 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.060 20:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.060 [2024-11-26 20:23:21.308013] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:28.060 [2024-11-26 20:23:21.308120] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:28.060 [2024-11-26 20:23:21.308243] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:28.060 [2024-11-26 20:23:21.308344] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:28.060 [2024-11-26 20:23:21.308399] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:10:28.060 20:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.060 20:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.060 20:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.060 20:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.060 20:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:28.060 20:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.060 20:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:28.060 20:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:10:28.060 20:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:10:28.060 20:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:10:28.060 20:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:10:28.060 20:23:21 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.060 20:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.060 20:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.060 20:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:28.060 20:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.060 20:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.060 [2024-11-26 20:23:21.387882] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:28.060 [2024-11-26 20:23:21.388049] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.060 [2024-11-26 20:23:21.388099] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:28.060 [2024-11-26 20:23:21.388140] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.060 [2024-11-26 20:23:21.390781] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.060 [2024-11-26 20:23:21.390880] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:28.060 [2024-11-26 20:23:21.391013] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:28.060 [2024-11-26 20:23:21.391099] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:28.060 [2024-11-26 20:23:21.391228] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:28.060 [2024-11-26 20:23:21.391247] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:28.060 [2024-11-26 20:23:21.391264] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007080 name raid_bdev1, state configuring 00:10:28.060 [2024-11-26 20:23:21.391301] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:28.060 pt1 00:10:28.060 20:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.060 20:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:10:28.060 20:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:28.060 20:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:28.060 20:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:28.060 20:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:28.060 20:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:28.060 20:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:28.060 20:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.060 20:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.060 20:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.060 20:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.060 20:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.060 20:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.060 20:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.060 20:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:28.060 20:23:21 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.060 20:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.060 "name": "raid_bdev1", 00:10:28.060 "uuid": "3ab7862e-a445-4bea-b954-f09c23f5b626", 00:10:28.060 "strip_size_kb": 0, 00:10:28.060 "state": "configuring", 00:10:28.060 "raid_level": "raid1", 00:10:28.060 "superblock": true, 00:10:28.060 "num_base_bdevs": 3, 00:10:28.060 "num_base_bdevs_discovered": 1, 00:10:28.060 "num_base_bdevs_operational": 2, 00:10:28.060 "base_bdevs_list": [ 00:10:28.060 { 00:10:28.060 "name": null, 00:10:28.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.060 "is_configured": false, 00:10:28.060 "data_offset": 2048, 00:10:28.060 "data_size": 63488 00:10:28.060 }, 00:10:28.060 { 00:10:28.060 "name": "pt2", 00:10:28.060 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:28.060 "is_configured": true, 00:10:28.060 "data_offset": 2048, 00:10:28.060 "data_size": 63488 00:10:28.060 }, 00:10:28.060 { 00:10:28.060 "name": null, 00:10:28.060 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:28.060 "is_configured": false, 00:10:28.060 "data_offset": 2048, 00:10:28.060 "data_size": 63488 00:10:28.060 } 00:10:28.060 ] 00:10:28.060 }' 00:10:28.060 20:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.060 20:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.318 20:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:28.318 20:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:10:28.318 20:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.318 20:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.318 20:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:28.318 20:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:10:28.318 20:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:28.318 20:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.318 20:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.318 [2024-11-26 20:23:21.835172] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:28.318 [2024-11-26 20:23:21.835335] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.318 [2024-11-26 20:23:21.835378] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:10:28.318 [2024-11-26 20:23:21.835421] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.318 [2024-11-26 20:23:21.835934] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.318 [2024-11-26 20:23:21.836016] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:28.318 [2024-11-26 20:23:21.836146] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:28.318 [2024-11-26 20:23:21.836235] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:28.318 [2024-11-26 20:23:21.836394] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:10:28.318 [2024-11-26 20:23:21.836454] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:28.319 [2024-11-26 20:23:21.836758] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:28.319 [2024-11-26 20:23:21.836963] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:10:28.319 [2024-11-26 20:23:21.837010] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:10:28.319 [2024-11-26 20:23:21.837180] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:28.319 pt3 00:10:28.319 20:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.319 20:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:28.319 20:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:28.319 20:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:28.319 20:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:28.319 20:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:28.319 20:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:28.319 20:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.319 20:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.319 20:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.319 20:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.319 20:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.319 20:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.319 20:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:28.319 20:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.319 20:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:10:28.578 20:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.578 "name": "raid_bdev1", 00:10:28.578 "uuid": "3ab7862e-a445-4bea-b954-f09c23f5b626", 00:10:28.578 "strip_size_kb": 0, 00:10:28.578 "state": "online", 00:10:28.578 "raid_level": "raid1", 00:10:28.578 "superblock": true, 00:10:28.578 "num_base_bdevs": 3, 00:10:28.578 "num_base_bdevs_discovered": 2, 00:10:28.578 "num_base_bdevs_operational": 2, 00:10:28.578 "base_bdevs_list": [ 00:10:28.578 { 00:10:28.578 "name": null, 00:10:28.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.578 "is_configured": false, 00:10:28.578 "data_offset": 2048, 00:10:28.578 "data_size": 63488 00:10:28.578 }, 00:10:28.578 { 00:10:28.578 "name": "pt2", 00:10:28.578 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:28.578 "is_configured": true, 00:10:28.578 "data_offset": 2048, 00:10:28.578 "data_size": 63488 00:10:28.578 }, 00:10:28.578 { 00:10:28.578 "name": "pt3", 00:10:28.578 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:28.578 "is_configured": true, 00:10:28.578 "data_offset": 2048, 00:10:28.578 "data_size": 63488 00:10:28.578 } 00:10:28.578 ] 00:10:28.578 }' 00:10:28.578 20:23:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.578 20:23:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.912 20:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:28.912 20:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:28.912 20:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.912 20:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.912 20:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.912 20:23:22 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:28.912 20:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:28.912 20:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:28.912 20:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.912 20:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.912 [2024-11-26 20:23:22.370627] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:28.912 20:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.912 20:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 3ab7862e-a445-4bea-b954-f09c23f5b626 '!=' 3ab7862e-a445-4bea-b954-f09c23f5b626 ']' 00:10:28.912 20:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 80104 00:10:28.912 20:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 80104 ']' 00:10:28.912 20:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 80104 00:10:28.912 20:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:10:28.912 20:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:28.912 20:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80104 00:10:28.912 killing process with pid 80104 00:10:28.912 20:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:28.912 20:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:28.912 20:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80104' 00:10:28.912 20:23:22 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@969 -- # kill 80104 00:10:28.912 [2024-11-26 20:23:22.440520] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:28.912 [2024-11-26 20:23:22.440642] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:28.912 [2024-11-26 20:23:22.440720] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:28.912 [2024-11-26 20:23:22.440731] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:10:28.912 20:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 80104 00:10:29.170 [2024-11-26 20:23:22.500876] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:29.429 20:23:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:29.429 00:10:29.429 real 0m6.955s 00:10:29.429 user 0m11.536s 00:10:29.429 sys 0m1.447s 00:10:29.429 20:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:29.429 20:23:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.429 ************************************ 00:10:29.429 END TEST raid_superblock_test 00:10:29.429 ************************************ 00:10:29.429 20:23:22 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:10:29.429 20:23:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:29.429 20:23:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:29.429 20:23:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:29.429 ************************************ 00:10:29.429 START TEST raid_read_error_test 00:10:29.429 ************************************ 00:10:29.429 20:23:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 read 00:10:29.429 20:23:22 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:29.429 20:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:29.429 20:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:29.429 20:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:29.429 20:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:29.429 20:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:29.429 20:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:29.429 20:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:29.429 20:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:29.429 20:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:29.429 20:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:29.429 20:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:29.429 20:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:29.429 20:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:29.429 20:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:29.429 20:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:29.429 20:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:29.429 20:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:29.429 20:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:29.429 20:23:22 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:29.429 20:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:29.429 20:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:29.429 20:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:29.429 20:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:29.429 20:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.eRM12K2Qfc 00:10:29.429 20:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=80539 00:10:29.429 20:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:29.429 20:23:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 80539 00:10:29.429 20:23:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 80539 ']' 00:10:29.429 20:23:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.429 20:23:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:29.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.429 20:23:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.429 20:23:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:29.429 20:23:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.688 [2024-11-26 20:23:23.050470] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:29.688 [2024-11-26 20:23:23.050670] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80539 ] 00:10:29.688 [2024-11-26 20:23:23.231094] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.946 [2024-11-26 20:23:23.316711] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.946 [2024-11-26 20:23:23.397684] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:29.946 [2024-11-26 20:23:23.397736] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:30.515 20:23:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:30.515 20:23:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:30.515 20:23:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:30.515 20:23:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:30.515 20:23:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.515 20:23:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.515 BaseBdev1_malloc 00:10:30.515 20:23:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.515 20:23:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:30.515 20:23:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.515 20:23:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.515 true 00:10:30.515 20:23:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:30.515 20:23:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:30.515 20:23:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.515 20:23:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.515 [2024-11-26 20:23:24.026707] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:30.515 [2024-11-26 20:23:24.026783] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:30.515 [2024-11-26 20:23:24.026826] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:30.515 [2024-11-26 20:23:24.026847] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:30.515 [2024-11-26 20:23:24.029500] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:30.515 [2024-11-26 20:23:24.029558] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:30.515 BaseBdev1 00:10:30.515 20:23:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.515 20:23:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:30.515 20:23:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:30.515 20:23:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.515 20:23:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.515 BaseBdev2_malloc 00:10:30.515 20:23:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.774 20:23:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:30.774 20:23:24 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.774 20:23:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.774 true 00:10:30.774 20:23:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.774 20:23:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:30.774 20:23:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.774 20:23:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.774 [2024-11-26 20:23:24.083493] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:30.774 [2024-11-26 20:23:24.083572] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:30.774 [2024-11-26 20:23:24.083599] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:30.774 [2024-11-26 20:23:24.083609] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:30.774 [2024-11-26 20:23:24.086249] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:30.774 [2024-11-26 20:23:24.086297] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:30.774 BaseBdev2 00:10:30.774 20:23:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.774 20:23:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:30.774 20:23:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:30.774 20:23:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.774 20:23:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.774 BaseBdev3_malloc 00:10:30.774 20:23:24 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.774 20:23:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:30.774 20:23:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.774 20:23:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.774 true 00:10:30.774 20:23:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.774 20:23:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:30.774 20:23:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.774 20:23:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.774 [2024-11-26 20:23:24.126954] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:30.774 [2024-11-26 20:23:24.127020] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:30.774 [2024-11-26 20:23:24.127047] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:30.774 [2024-11-26 20:23:24.127058] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:30.774 [2024-11-26 20:23:24.129704] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:30.774 [2024-11-26 20:23:24.129752] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:30.774 BaseBdev3 00:10:30.774 20:23:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.775 20:23:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:30.775 20:23:24 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.775 20:23:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.775 [2024-11-26 20:23:24.139063] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:30.775 [2024-11-26 20:23:24.141821] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:30.775 [2024-11-26 20:23:24.141940] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:30.775 [2024-11-26 20:23:24.142169] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:30.775 [2024-11-26 20:23:24.142199] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:30.775 [2024-11-26 20:23:24.142533] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:10:30.775 [2024-11-26 20:23:24.142790] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:30.775 [2024-11-26 20:23:24.142819] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:10:30.775 [2024-11-26 20:23:24.143066] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:30.775 20:23:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.775 20:23:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:30.775 20:23:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:30.775 20:23:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:30.775 20:23:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:30.775 20:23:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:30.775 20:23:24 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:30.775 20:23:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.775 20:23:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.775 20:23:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.775 20:23:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.775 20:23:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.775 20:23:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.775 20:23:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:30.775 20:23:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.775 20:23:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.775 20:23:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.775 "name": "raid_bdev1", 00:10:30.775 "uuid": "6a007a6b-16a0-4e57-a687-01ada0efce48", 00:10:30.775 "strip_size_kb": 0, 00:10:30.775 "state": "online", 00:10:30.775 "raid_level": "raid1", 00:10:30.775 "superblock": true, 00:10:30.775 "num_base_bdevs": 3, 00:10:30.775 "num_base_bdevs_discovered": 3, 00:10:30.775 "num_base_bdevs_operational": 3, 00:10:30.775 "base_bdevs_list": [ 00:10:30.775 { 00:10:30.775 "name": "BaseBdev1", 00:10:30.775 "uuid": "08916a66-61e6-5032-b331-033eea82bc51", 00:10:30.775 "is_configured": true, 00:10:30.775 "data_offset": 2048, 00:10:30.775 "data_size": 63488 00:10:30.775 }, 00:10:30.775 { 00:10:30.775 "name": "BaseBdev2", 00:10:30.775 "uuid": "42486d0b-cbfe-51ea-b8fe-ffb8255785b1", 00:10:30.775 "is_configured": true, 00:10:30.775 "data_offset": 2048, 00:10:30.775 "data_size": 63488 
00:10:30.775 }, 00:10:30.775 { 00:10:30.775 "name": "BaseBdev3", 00:10:30.775 "uuid": "27d02098-854f-5bbd-8616-683ac385032f", 00:10:30.775 "is_configured": true, 00:10:30.775 "data_offset": 2048, 00:10:30.775 "data_size": 63488 00:10:30.775 } 00:10:30.775 ] 00:10:30.775 }' 00:10:30.775 20:23:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.775 20:23:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.344 20:23:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:31.344 20:23:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:31.344 [2024-11-26 20:23:24.706662] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:32.280 20:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:32.280 20:23:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.280 20:23:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.280 20:23:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.280 20:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:32.280 20:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:32.280 20:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:32.280 20:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:32.280 20:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:32.280 20:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:32.280 
20:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:32.280 20:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:32.281 20:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:32.281 20:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:32.281 20:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.281 20:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.281 20:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.281 20:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.281 20:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.281 20:23:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.281 20:23:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.281 20:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:32.281 20:23:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.281 20:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.281 "name": "raid_bdev1", 00:10:32.281 "uuid": "6a007a6b-16a0-4e57-a687-01ada0efce48", 00:10:32.281 "strip_size_kb": 0, 00:10:32.281 "state": "online", 00:10:32.281 "raid_level": "raid1", 00:10:32.281 "superblock": true, 00:10:32.281 "num_base_bdevs": 3, 00:10:32.281 "num_base_bdevs_discovered": 3, 00:10:32.281 "num_base_bdevs_operational": 3, 00:10:32.281 "base_bdevs_list": [ 00:10:32.281 { 00:10:32.281 "name": "BaseBdev1", 00:10:32.281 "uuid": "08916a66-61e6-5032-b331-033eea82bc51", 
00:10:32.281 "is_configured": true, 00:10:32.281 "data_offset": 2048, 00:10:32.281 "data_size": 63488 00:10:32.281 }, 00:10:32.281 { 00:10:32.281 "name": "BaseBdev2", 00:10:32.281 "uuid": "42486d0b-cbfe-51ea-b8fe-ffb8255785b1", 00:10:32.281 "is_configured": true, 00:10:32.281 "data_offset": 2048, 00:10:32.281 "data_size": 63488 00:10:32.281 }, 00:10:32.281 { 00:10:32.281 "name": "BaseBdev3", 00:10:32.281 "uuid": "27d02098-854f-5bbd-8616-683ac385032f", 00:10:32.281 "is_configured": true, 00:10:32.281 "data_offset": 2048, 00:10:32.281 "data_size": 63488 00:10:32.281 } 00:10:32.281 ] 00:10:32.281 }' 00:10:32.281 20:23:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.281 20:23:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.540 20:23:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:32.540 20:23:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.540 20:23:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.883 [2024-11-26 20:23:26.091114] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:32.883 [2024-11-26 20:23:26.091162] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:32.883 [2024-11-26 20:23:26.094307] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:32.883 [2024-11-26 20:23:26.094380] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:32.883 [2024-11-26 20:23:26.094496] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:32.883 [2024-11-26 20:23:26.094516] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:10:32.883 { 00:10:32.883 "results": [ 00:10:32.883 { 00:10:32.883 "job": "raid_bdev1", 
00:10:32.883 "core_mask": "0x1", 00:10:32.883 "workload": "randrw", 00:10:32.883 "percentage": 50, 00:10:32.883 "status": "finished", 00:10:32.883 "queue_depth": 1, 00:10:32.883 "io_size": 131072, 00:10:32.883 "runtime": 1.384851, 00:10:32.883 "iops": 9203.878251162037, 00:10:32.883 "mibps": 1150.4847813952547, 00:10:32.883 "io_failed": 0, 00:10:32.883 "io_timeout": 0, 00:10:32.883 "avg_latency_us": 105.41262654882051, 00:10:32.883 "min_latency_us": 24.929257641921396, 00:10:32.883 "max_latency_us": 1788.646288209607 00:10:32.883 } 00:10:32.883 ], 00:10:32.883 "core_count": 1 00:10:32.883 } 00:10:32.883 20:23:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.883 20:23:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 80539 00:10:32.883 20:23:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 80539 ']' 00:10:32.883 20:23:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 80539 00:10:32.883 20:23:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:10:32.883 20:23:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:32.883 20:23:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80539 00:10:32.883 20:23:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:32.883 20:23:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:32.883 20:23:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80539' 00:10:32.883 killing process with pid 80539 00:10:32.883 20:23:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 80539 00:10:32.883 [2024-11-26 20:23:26.144158] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:32.883 20:23:26 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 80539 00:10:32.883 [2024-11-26 20:23:26.194247] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:33.143 20:23:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.eRM12K2Qfc 00:10:33.143 20:23:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:33.143 20:23:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:33.143 20:23:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:33.143 20:23:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:33.143 20:23:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:33.143 20:23:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:33.143 20:23:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:33.143 00:10:33.143 real 0m3.640s 00:10:33.143 user 0m4.552s 00:10:33.143 sys 0m0.678s 00:10:33.143 20:23:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:33.143 20:23:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.143 ************************************ 00:10:33.143 END TEST raid_read_error_test 00:10:33.143 ************************************ 00:10:33.143 20:23:26 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:10:33.143 20:23:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:33.143 20:23:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:33.143 20:23:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:33.143 ************************************ 00:10:33.143 START TEST raid_write_error_test 00:10:33.143 ************************************ 00:10:33.143 20:23:26 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 write 00:10:33.143 20:23:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:33.143 20:23:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:33.143 20:23:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:33.143 20:23:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:33.143 20:23:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:33.143 20:23:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:33.143 20:23:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:33.143 20:23:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:33.143 20:23:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:33.144 20:23:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:33.144 20:23:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:33.144 20:23:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:33.144 20:23:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:33.144 20:23:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:33.144 20:23:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:33.144 20:23:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:33.144 20:23:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:33.144 20:23:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:10:33.144 20:23:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:33.144 20:23:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:33.144 20:23:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:33.144 20:23:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:33.144 20:23:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:33.144 20:23:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:33.144 20:23:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ptyItZxMvP 00:10:33.144 20:23:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=80674 00:10:33.144 20:23:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:33.144 20:23:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 80674 00:10:33.144 20:23:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 80674 ']' 00:10:33.144 20:23:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:33.144 20:23:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:33.144 20:23:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:33.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:33.144 20:23:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:33.144 20:23:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.402 [2024-11-26 20:23:26.758476] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:10:33.402 [2024-11-26 20:23:26.758634] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80674 ] 00:10:33.402 [2024-11-26 20:23:26.922528] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.661 [2024-11-26 20:23:27.012283] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.662 [2024-11-26 20:23:27.090473] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:33.662 [2024-11-26 20:23:27.090523] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:34.230 20:23:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:34.230 20:23:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:34.230 20:23:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:34.230 20:23:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:34.230 20:23:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.230 20:23:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.230 BaseBdev1_malloc 00:10:34.230 20:23:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.230 20:23:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:34.230 20:23:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.230 20:23:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.230 true 00:10:34.230 20:23:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.230 20:23:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:34.230 20:23:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.230 20:23:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.230 [2024-11-26 20:23:27.718375] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:34.230 [2024-11-26 20:23:27.718548] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:34.230 [2024-11-26 20:23:27.718582] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:34.230 [2024-11-26 20:23:27.718604] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:34.230 [2024-11-26 20:23:27.721370] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:34.230 [2024-11-26 20:23:27.721429] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:34.230 BaseBdev1 00:10:34.230 20:23:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.230 20:23:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:34.230 20:23:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:34.230 20:23:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.230 20:23:27 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:34.230 BaseBdev2_malloc 00:10:34.230 20:23:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.230 20:23:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:34.230 20:23:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.230 20:23:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.230 true 00:10:34.230 20:23:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.230 20:23:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:34.230 20:23:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.230 20:23:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.230 [2024-11-26 20:23:27.776565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:34.230 [2024-11-26 20:23:27.776759] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:34.230 [2024-11-26 20:23:27.776809] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:34.230 [2024-11-26 20:23:27.776881] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:34.230 [2024-11-26 20:23:27.779438] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:34.230 [2024-11-26 20:23:27.779540] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:34.489 BaseBdev2 00:10:34.489 20:23:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.489 20:23:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:34.489 20:23:27 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:34.489 20:23:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.489 20:23:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.489 BaseBdev3_malloc 00:10:34.489 20:23:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.489 20:23:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:34.489 20:23:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.489 20:23:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.489 true 00:10:34.489 20:23:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.489 20:23:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:34.489 20:23:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.489 20:23:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.489 [2024-11-26 20:23:27.824571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:34.489 [2024-11-26 20:23:27.824688] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:34.489 [2024-11-26 20:23:27.824722] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:34.489 [2024-11-26 20:23:27.824734] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:34.489 [2024-11-26 20:23:27.827383] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:34.489 [2024-11-26 20:23:27.827440] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:34.489 BaseBdev3 00:10:34.489 20:23:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.489 20:23:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:34.489 20:23:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.489 20:23:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.489 [2024-11-26 20:23:27.836651] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:34.489 [2024-11-26 20:23:27.838969] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:34.489 [2024-11-26 20:23:27.839077] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:34.489 [2024-11-26 20:23:27.839305] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:34.489 [2024-11-26 20:23:27.839326] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:34.489 [2024-11-26 20:23:27.839656] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:10:34.489 [2024-11-26 20:23:27.839842] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:34.489 [2024-11-26 20:23:27.839856] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:10:34.489 [2024-11-26 20:23:27.840031] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:34.489 20:23:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.489 20:23:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:34.489 20:23:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:10:34.489 20:23:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:34.489 20:23:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:34.489 20:23:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:34.489 20:23:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:34.489 20:23:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.489 20:23:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.489 20:23:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.489 20:23:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.489 20:23:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.489 20:23:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.490 20:23:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:34.490 20:23:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.490 20:23:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.490 20:23:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.490 "name": "raid_bdev1", 00:10:34.490 "uuid": "d50512b6-71ae-4aff-ae30-9d3d1b2c1bee", 00:10:34.490 "strip_size_kb": 0, 00:10:34.490 "state": "online", 00:10:34.490 "raid_level": "raid1", 00:10:34.490 "superblock": true, 00:10:34.490 "num_base_bdevs": 3, 00:10:34.490 "num_base_bdevs_discovered": 3, 00:10:34.490 "num_base_bdevs_operational": 3, 00:10:34.490 "base_bdevs_list": [ 00:10:34.490 { 00:10:34.490 "name": "BaseBdev1", 00:10:34.490 
"uuid": "35646d22-d3ff-5d3d-8f68-542c10f7472c", 00:10:34.490 "is_configured": true, 00:10:34.490 "data_offset": 2048, 00:10:34.490 "data_size": 63488 00:10:34.490 }, 00:10:34.490 { 00:10:34.490 "name": "BaseBdev2", 00:10:34.490 "uuid": "901b711e-a6c6-5eb6-8d24-3e6b8fce7f9b", 00:10:34.490 "is_configured": true, 00:10:34.490 "data_offset": 2048, 00:10:34.490 "data_size": 63488 00:10:34.490 }, 00:10:34.490 { 00:10:34.490 "name": "BaseBdev3", 00:10:34.490 "uuid": "ec5f837f-6d41-577f-bf96-52acd2ae899b", 00:10:34.490 "is_configured": true, 00:10:34.490 "data_offset": 2048, 00:10:34.490 "data_size": 63488 00:10:34.490 } 00:10:34.490 ] 00:10:34.490 }' 00:10:34.490 20:23:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.490 20:23:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.749 20:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:34.749 20:23:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:35.009 [2024-11-26 20:23:28.364247] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:35.954 20:23:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:35.954 20:23:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.954 20:23:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.954 [2024-11-26 20:23:29.250683] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:35.954 [2024-11-26 20:23:29.250879] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:35.954 [2024-11-26 20:23:29.251159] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005e10 
00:10:35.954 20:23:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.954 20:23:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:35.954 20:23:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:35.954 20:23:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:35.954 20:23:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:35.954 20:23:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:35.954 20:23:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:35.954 20:23:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:35.954 20:23:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:35.954 20:23:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:35.954 20:23:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:35.954 20:23:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.954 20:23:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.954 20:23:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.954 20:23:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.954 20:23:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.955 20:23:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.955 20:23:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.955 
20:23:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:35.955 20:23:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.955 20:23:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.955 "name": "raid_bdev1", 00:10:35.955 "uuid": "d50512b6-71ae-4aff-ae30-9d3d1b2c1bee", 00:10:35.955 "strip_size_kb": 0, 00:10:35.955 "state": "online", 00:10:35.955 "raid_level": "raid1", 00:10:35.955 "superblock": true, 00:10:35.955 "num_base_bdevs": 3, 00:10:35.955 "num_base_bdevs_discovered": 2, 00:10:35.955 "num_base_bdevs_operational": 2, 00:10:35.955 "base_bdevs_list": [ 00:10:35.955 { 00:10:35.955 "name": null, 00:10:35.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.955 "is_configured": false, 00:10:35.955 "data_offset": 0, 00:10:35.955 "data_size": 63488 00:10:35.955 }, 00:10:35.955 { 00:10:35.955 "name": "BaseBdev2", 00:10:35.955 "uuid": "901b711e-a6c6-5eb6-8d24-3e6b8fce7f9b", 00:10:35.955 "is_configured": true, 00:10:35.955 "data_offset": 2048, 00:10:35.955 "data_size": 63488 00:10:35.955 }, 00:10:35.955 { 00:10:35.955 "name": "BaseBdev3", 00:10:35.955 "uuid": "ec5f837f-6d41-577f-bf96-52acd2ae899b", 00:10:35.955 "is_configured": true, 00:10:35.955 "data_offset": 2048, 00:10:35.955 "data_size": 63488 00:10:35.955 } 00:10:35.955 ] 00:10:35.955 }' 00:10:35.955 20:23:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.955 20:23:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.225 20:23:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:36.225 20:23:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.225 20:23:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.225 [2024-11-26 20:23:29.755395] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:36.225 [2024-11-26 20:23:29.755434] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:36.225 [2024-11-26 20:23:29.758641] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:36.225 [2024-11-26 20:23:29.758759] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:36.225 [2024-11-26 20:23:29.758891] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:36.225 [2024-11-26 20:23:29.758946] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:10:36.225 { 00:10:36.225 "results": [ 00:10:36.225 { 00:10:36.225 "job": "raid_bdev1", 00:10:36.225 "core_mask": "0x1", 00:10:36.225 "workload": "randrw", 00:10:36.225 "percentage": 50, 00:10:36.225 "status": "finished", 00:10:36.225 "queue_depth": 1, 00:10:36.225 "io_size": 131072, 00:10:36.225 "runtime": 1.391471, 00:10:36.225 "iops": 10274.73802903546, 00:10:36.225 "mibps": 1284.3422536294324, 00:10:36.225 "io_failed": 0, 00:10:36.225 "io_timeout": 0, 00:10:36.225 "avg_latency_us": 94.10895216359862, 00:10:36.225 "min_latency_us": 28.50655021834061, 00:10:36.225 "max_latency_us": 1831.5737991266376 00:10:36.225 } 00:10:36.225 ], 00:10:36.225 "core_count": 1 00:10:36.225 } 00:10:36.225 20:23:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.225 20:23:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 80674 00:10:36.225 20:23:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 80674 ']' 00:10:36.225 20:23:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 80674 00:10:36.225 20:23:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:10:36.225 20:23:29 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:36.485 20:23:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80674 00:10:36.485 20:23:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:36.485 20:23:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:36.485 20:23:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80674' 00:10:36.485 killing process with pid 80674 00:10:36.485 20:23:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 80674 00:10:36.485 [2024-11-26 20:23:29.803040] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:36.485 20:23:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 80674 00:10:36.485 [2024-11-26 20:23:29.846237] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:36.745 20:23:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:36.745 20:23:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:36.745 20:23:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ptyItZxMvP 00:10:36.745 20:23:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:36.745 20:23:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:36.745 20:23:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:36.745 20:23:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:36.745 20:23:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:36.745 00:10:36.745 real 0m3.593s 00:10:36.745 user 0m4.488s 00:10:36.745 sys 0m0.649s 00:10:36.745 20:23:30 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:36.745 20:23:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.745 ************************************ 00:10:36.745 END TEST raid_write_error_test 00:10:36.745 ************************************ 00:10:37.004 20:23:30 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:10:37.004 20:23:30 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:37.004 20:23:30 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:10:37.004 20:23:30 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:37.004 20:23:30 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:37.004 20:23:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:37.004 ************************************ 00:10:37.004 START TEST raid_state_function_test 00:10:37.004 ************************************ 00:10:37.004 20:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 false 00:10:37.004 20:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:37.004 20:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:37.004 20:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:37.004 20:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:37.004 20:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:37.004 20:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:37.004 20:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:37.004 20:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:10:37.004 20:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:37.004 20:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:37.004 20:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:37.004 20:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:37.004 20:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:37.004 20:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:37.004 20:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:37.004 20:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:37.004 20:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:37.004 20:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:37.004 20:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:37.004 20:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:37.004 20:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:37.004 20:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:37.004 20:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:37.004 20:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:37.004 20:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:37.004 20:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:37.004 
20:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:37.004 20:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:37.004 20:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:37.004 20:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80812 00:10:37.004 20:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:37.004 20:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80812' 00:10:37.004 Process raid pid: 80812 00:10:37.004 20:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80812 00:10:37.004 20:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 80812 ']' 00:10:37.004 20:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.004 20:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:37.004 20:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.004 20:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:37.005 20:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.005 [2024-11-26 20:23:30.425789] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:37.005 [2024-11-26 20:23:30.426106] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:37.263 [2024-11-26 20:23:30.596538] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.263 [2024-11-26 20:23:30.685995] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.263 [2024-11-26 20:23:30.773327] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:37.263 [2024-11-26 20:23:30.773500] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:37.831 20:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:37.831 20:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:10:37.831 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:37.831 20:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.831 20:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.831 [2024-11-26 20:23:31.365233] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:37.831 [2024-11-26 20:23:31.365384] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:37.831 [2024-11-26 20:23:31.365436] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:37.831 [2024-11-26 20:23:31.365473] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:37.831 [2024-11-26 20:23:31.365512] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:37.831 [2024-11-26 20:23:31.365565] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:37.831 [2024-11-26 20:23:31.365604] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:37.831 [2024-11-26 20:23:31.365671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:37.831 20:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.831 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:37.831 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.831 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.831 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:37.831 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.831 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:37.831 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.831 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.831 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.831 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.831 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.831 20:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.831 20:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:37.831 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.089 20:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.089 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.089 "name": "Existed_Raid", 00:10:38.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.089 "strip_size_kb": 64, 00:10:38.089 "state": "configuring", 00:10:38.089 "raid_level": "raid0", 00:10:38.089 "superblock": false, 00:10:38.089 "num_base_bdevs": 4, 00:10:38.090 "num_base_bdevs_discovered": 0, 00:10:38.090 "num_base_bdevs_operational": 4, 00:10:38.090 "base_bdevs_list": [ 00:10:38.090 { 00:10:38.090 "name": "BaseBdev1", 00:10:38.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.090 "is_configured": false, 00:10:38.090 "data_offset": 0, 00:10:38.090 "data_size": 0 00:10:38.090 }, 00:10:38.090 { 00:10:38.090 "name": "BaseBdev2", 00:10:38.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.090 "is_configured": false, 00:10:38.090 "data_offset": 0, 00:10:38.090 "data_size": 0 00:10:38.090 }, 00:10:38.090 { 00:10:38.090 "name": "BaseBdev3", 00:10:38.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.090 "is_configured": false, 00:10:38.090 "data_offset": 0, 00:10:38.090 "data_size": 0 00:10:38.090 }, 00:10:38.090 { 00:10:38.090 "name": "BaseBdev4", 00:10:38.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.090 "is_configured": false, 00:10:38.090 "data_offset": 0, 00:10:38.090 "data_size": 0 00:10:38.090 } 00:10:38.090 ] 00:10:38.090 }' 00:10:38.090 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.090 20:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.349 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:10:38.349 20:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.349 20:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.349 [2024-11-26 20:23:31.812669] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:38.349 [2024-11-26 20:23:31.812846] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:10:38.349 20:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.349 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:38.349 20:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.349 20:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.349 [2024-11-26 20:23:31.824696] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:38.349 [2024-11-26 20:23:31.824824] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:38.349 [2024-11-26 20:23:31.824867] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:38.349 [2024-11-26 20:23:31.824903] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:38.349 [2024-11-26 20:23:31.824982] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:38.349 [2024-11-26 20:23:31.825018] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:38.349 [2024-11-26 20:23:31.825082] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:38.349 [2024-11-26 20:23:31.825129] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:38.349 20:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.349 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:38.349 20:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.349 20:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.349 [2024-11-26 20:23:31.852207] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:38.349 BaseBdev1 00:10:38.349 20:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.349 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:38.349 20:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:38.349 20:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:38.349 20:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:38.349 20:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:38.349 20:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:38.349 20:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:38.349 20:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.349 20:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.349 20:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.349 20:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:38.349 20:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.349 20:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.349 [ 00:10:38.349 { 00:10:38.349 "name": "BaseBdev1", 00:10:38.349 "aliases": [ 00:10:38.349 "00356f3b-fee8-422d-9964-717818f58241" 00:10:38.349 ], 00:10:38.349 "product_name": "Malloc disk", 00:10:38.349 "block_size": 512, 00:10:38.349 "num_blocks": 65536, 00:10:38.349 "uuid": "00356f3b-fee8-422d-9964-717818f58241", 00:10:38.349 "assigned_rate_limits": { 00:10:38.349 "rw_ios_per_sec": 0, 00:10:38.349 "rw_mbytes_per_sec": 0, 00:10:38.349 "r_mbytes_per_sec": 0, 00:10:38.349 "w_mbytes_per_sec": 0 00:10:38.349 }, 00:10:38.349 "claimed": true, 00:10:38.349 "claim_type": "exclusive_write", 00:10:38.349 "zoned": false, 00:10:38.349 "supported_io_types": { 00:10:38.349 "read": true, 00:10:38.349 "write": true, 00:10:38.349 "unmap": true, 00:10:38.349 "flush": true, 00:10:38.349 "reset": true, 00:10:38.349 "nvme_admin": false, 00:10:38.349 "nvme_io": false, 00:10:38.349 "nvme_io_md": false, 00:10:38.349 "write_zeroes": true, 00:10:38.349 "zcopy": true, 00:10:38.349 "get_zone_info": false, 00:10:38.349 "zone_management": false, 00:10:38.349 "zone_append": false, 00:10:38.349 "compare": false, 00:10:38.349 "compare_and_write": false, 00:10:38.349 "abort": true, 00:10:38.349 "seek_hole": false, 00:10:38.349 "seek_data": false, 00:10:38.349 "copy": true, 00:10:38.349 "nvme_iov_md": false 00:10:38.349 }, 00:10:38.349 "memory_domains": [ 00:10:38.349 { 00:10:38.349 "dma_device_id": "system", 00:10:38.349 "dma_device_type": 1 00:10:38.349 }, 00:10:38.349 { 00:10:38.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.349 "dma_device_type": 2 00:10:38.349 } 00:10:38.349 ], 00:10:38.349 "driver_specific": {} 00:10:38.349 } 00:10:38.349 ] 00:10:38.349 20:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:10:38.349 20:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:38.349 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:38.349 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.349 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.349 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:38.349 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.349 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:38.349 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.349 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.349 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.349 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.349 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.349 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.349 20:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.609 20:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.609 20:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.609 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.609 "name": "Existed_Raid", 
00:10:38.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.609 "strip_size_kb": 64, 00:10:38.609 "state": "configuring", 00:10:38.609 "raid_level": "raid0", 00:10:38.609 "superblock": false, 00:10:38.609 "num_base_bdevs": 4, 00:10:38.609 "num_base_bdevs_discovered": 1, 00:10:38.609 "num_base_bdevs_operational": 4, 00:10:38.609 "base_bdevs_list": [ 00:10:38.609 { 00:10:38.609 "name": "BaseBdev1", 00:10:38.609 "uuid": "00356f3b-fee8-422d-9964-717818f58241", 00:10:38.609 "is_configured": true, 00:10:38.609 "data_offset": 0, 00:10:38.609 "data_size": 65536 00:10:38.609 }, 00:10:38.609 { 00:10:38.609 "name": "BaseBdev2", 00:10:38.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.609 "is_configured": false, 00:10:38.609 "data_offset": 0, 00:10:38.609 "data_size": 0 00:10:38.609 }, 00:10:38.609 { 00:10:38.609 "name": "BaseBdev3", 00:10:38.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.609 "is_configured": false, 00:10:38.609 "data_offset": 0, 00:10:38.609 "data_size": 0 00:10:38.609 }, 00:10:38.609 { 00:10:38.609 "name": "BaseBdev4", 00:10:38.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.609 "is_configured": false, 00:10:38.609 "data_offset": 0, 00:10:38.609 "data_size": 0 00:10:38.609 } 00:10:38.609 ] 00:10:38.609 }' 00:10:38.609 20:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.609 20:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.869 20:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:38.869 20:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.869 20:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.869 [2024-11-26 20:23:32.383406] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:38.869 [2024-11-26 20:23:32.383490] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:10:38.869 20:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.869 20:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:38.869 20:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.869 20:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.869 [2024-11-26 20:23:32.395462] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:38.869 [2024-11-26 20:23:32.397875] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:38.869 [2024-11-26 20:23:32.397940] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:38.869 [2024-11-26 20:23:32.397955] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:38.869 [2024-11-26 20:23:32.397969] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:38.869 [2024-11-26 20:23:32.397980] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:38.869 [2024-11-26 20:23:32.397993] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:38.869 20:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.869 20:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:38.869 20:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:38.869 20:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:10:38.869 20:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.869 20:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.869 20:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:38.869 20:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.869 20:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:38.869 20:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.869 20:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.869 20:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.869 20:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.869 20:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.869 20:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.869 20:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.869 20:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.128 20:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.128 20:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.128 "name": "Existed_Raid", 00:10:39.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.128 "strip_size_kb": 64, 00:10:39.128 "state": "configuring", 00:10:39.128 "raid_level": "raid0", 00:10:39.128 "superblock": false, 00:10:39.128 "num_base_bdevs": 4, 00:10:39.128 
"num_base_bdevs_discovered": 1, 00:10:39.128 "num_base_bdevs_operational": 4, 00:10:39.128 "base_bdevs_list": [ 00:10:39.128 { 00:10:39.128 "name": "BaseBdev1", 00:10:39.128 "uuid": "00356f3b-fee8-422d-9964-717818f58241", 00:10:39.129 "is_configured": true, 00:10:39.129 "data_offset": 0, 00:10:39.129 "data_size": 65536 00:10:39.129 }, 00:10:39.129 { 00:10:39.129 "name": "BaseBdev2", 00:10:39.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.129 "is_configured": false, 00:10:39.129 "data_offset": 0, 00:10:39.129 "data_size": 0 00:10:39.129 }, 00:10:39.129 { 00:10:39.129 "name": "BaseBdev3", 00:10:39.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.129 "is_configured": false, 00:10:39.129 "data_offset": 0, 00:10:39.129 "data_size": 0 00:10:39.129 }, 00:10:39.129 { 00:10:39.129 "name": "BaseBdev4", 00:10:39.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.129 "is_configured": false, 00:10:39.129 "data_offset": 0, 00:10:39.129 "data_size": 0 00:10:39.129 } 00:10:39.129 ] 00:10:39.129 }' 00:10:39.129 20:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.129 20:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.388 20:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:39.388 20:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.388 20:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.388 [2024-11-26 20:23:32.842571] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:39.388 BaseBdev2 00:10:39.388 20:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.388 20:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:39.388 20:23:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:39.388 20:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:39.388 20:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:39.388 20:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:39.388 20:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:39.388 20:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:39.388 20:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.388 20:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.388 20:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.388 20:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:39.388 20:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.388 20:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.388 [ 00:10:39.388 { 00:10:39.388 "name": "BaseBdev2", 00:10:39.388 "aliases": [ 00:10:39.388 "d8474861-eb1e-4551-b4f9-29d5de68b50e" 00:10:39.388 ], 00:10:39.388 "product_name": "Malloc disk", 00:10:39.388 "block_size": 512, 00:10:39.388 "num_blocks": 65536, 00:10:39.388 "uuid": "d8474861-eb1e-4551-b4f9-29d5de68b50e", 00:10:39.388 "assigned_rate_limits": { 00:10:39.388 "rw_ios_per_sec": 0, 00:10:39.388 "rw_mbytes_per_sec": 0, 00:10:39.388 "r_mbytes_per_sec": 0, 00:10:39.388 "w_mbytes_per_sec": 0 00:10:39.388 }, 00:10:39.388 "claimed": true, 00:10:39.388 "claim_type": "exclusive_write", 00:10:39.388 "zoned": false, 00:10:39.388 "supported_io_types": { 
00:10:39.388 "read": true, 00:10:39.388 "write": true, 00:10:39.388 "unmap": true, 00:10:39.388 "flush": true, 00:10:39.388 "reset": true, 00:10:39.388 "nvme_admin": false, 00:10:39.388 "nvme_io": false, 00:10:39.388 "nvme_io_md": false, 00:10:39.388 "write_zeroes": true, 00:10:39.388 "zcopy": true, 00:10:39.388 "get_zone_info": false, 00:10:39.388 "zone_management": false, 00:10:39.388 "zone_append": false, 00:10:39.388 "compare": false, 00:10:39.388 "compare_and_write": false, 00:10:39.388 "abort": true, 00:10:39.388 "seek_hole": false, 00:10:39.388 "seek_data": false, 00:10:39.388 "copy": true, 00:10:39.388 "nvme_iov_md": false 00:10:39.388 }, 00:10:39.388 "memory_domains": [ 00:10:39.388 { 00:10:39.388 "dma_device_id": "system", 00:10:39.388 "dma_device_type": 1 00:10:39.388 }, 00:10:39.388 { 00:10:39.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.389 "dma_device_type": 2 00:10:39.389 } 00:10:39.389 ], 00:10:39.389 "driver_specific": {} 00:10:39.389 } 00:10:39.389 ] 00:10:39.389 20:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.389 20:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:39.389 20:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:39.389 20:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:39.389 20:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:39.389 20:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.389 20:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.389 20:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:39.389 20:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:39.389 20:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.389 20:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.389 20:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.389 20:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.389 20:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.389 20:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.389 20:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.389 20:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.389 20:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.389 20:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.389 20:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.389 "name": "Existed_Raid", 00:10:39.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.389 "strip_size_kb": 64, 00:10:39.389 "state": "configuring", 00:10:39.389 "raid_level": "raid0", 00:10:39.389 "superblock": false, 00:10:39.389 "num_base_bdevs": 4, 00:10:39.389 "num_base_bdevs_discovered": 2, 00:10:39.389 "num_base_bdevs_operational": 4, 00:10:39.389 "base_bdevs_list": [ 00:10:39.389 { 00:10:39.389 "name": "BaseBdev1", 00:10:39.389 "uuid": "00356f3b-fee8-422d-9964-717818f58241", 00:10:39.389 "is_configured": true, 00:10:39.389 "data_offset": 0, 00:10:39.389 "data_size": 65536 00:10:39.389 }, 00:10:39.389 { 00:10:39.389 "name": "BaseBdev2", 00:10:39.389 "uuid": "d8474861-eb1e-4551-b4f9-29d5de68b50e", 00:10:39.389 
"is_configured": true, 00:10:39.389 "data_offset": 0, 00:10:39.389 "data_size": 65536 00:10:39.389 }, 00:10:39.389 { 00:10:39.389 "name": "BaseBdev3", 00:10:39.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.389 "is_configured": false, 00:10:39.389 "data_offset": 0, 00:10:39.389 "data_size": 0 00:10:39.389 }, 00:10:39.389 { 00:10:39.389 "name": "BaseBdev4", 00:10:39.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.389 "is_configured": false, 00:10:39.389 "data_offset": 0, 00:10:39.389 "data_size": 0 00:10:39.389 } 00:10:39.389 ] 00:10:39.389 }' 00:10:39.657 20:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.657 20:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.944 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:39.944 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.944 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.944 [2024-11-26 20:23:33.379291] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:39.944 BaseBdev3 00:10:39.944 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.944 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:39.944 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:39.944 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:39.944 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:39.944 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:39.944 20:23:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:39.944 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:39.944 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.944 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.944 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.944 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:39.944 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.944 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.944 [ 00:10:39.944 { 00:10:39.944 "name": "BaseBdev3", 00:10:39.944 "aliases": [ 00:10:39.944 "0cc1f6a6-755a-4570-abf6-09eb4b03c801" 00:10:39.944 ], 00:10:39.944 "product_name": "Malloc disk", 00:10:39.944 "block_size": 512, 00:10:39.944 "num_blocks": 65536, 00:10:39.944 "uuid": "0cc1f6a6-755a-4570-abf6-09eb4b03c801", 00:10:39.944 "assigned_rate_limits": { 00:10:39.944 "rw_ios_per_sec": 0, 00:10:39.944 "rw_mbytes_per_sec": 0, 00:10:39.944 "r_mbytes_per_sec": 0, 00:10:39.944 "w_mbytes_per_sec": 0 00:10:39.944 }, 00:10:39.944 "claimed": true, 00:10:39.944 "claim_type": "exclusive_write", 00:10:39.944 "zoned": false, 00:10:39.944 "supported_io_types": { 00:10:39.944 "read": true, 00:10:39.944 "write": true, 00:10:39.944 "unmap": true, 00:10:39.944 "flush": true, 00:10:39.944 "reset": true, 00:10:39.944 "nvme_admin": false, 00:10:39.944 "nvme_io": false, 00:10:39.944 "nvme_io_md": false, 00:10:39.944 "write_zeroes": true, 00:10:39.944 "zcopy": true, 00:10:39.945 "get_zone_info": false, 00:10:39.945 "zone_management": false, 00:10:39.945 "zone_append": false, 00:10:39.945 "compare": false, 00:10:39.945 "compare_and_write": false, 
00:10:39.945 "abort": true, 00:10:39.945 "seek_hole": false, 00:10:39.945 "seek_data": false, 00:10:39.945 "copy": true, 00:10:39.945 "nvme_iov_md": false 00:10:39.945 }, 00:10:39.945 "memory_domains": [ 00:10:39.945 { 00:10:39.945 "dma_device_id": "system", 00:10:39.945 "dma_device_type": 1 00:10:39.945 }, 00:10:39.945 { 00:10:39.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.945 "dma_device_type": 2 00:10:39.945 } 00:10:39.945 ], 00:10:39.945 "driver_specific": {} 00:10:39.945 } 00:10:39.945 ] 00:10:39.945 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.945 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:39.945 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:39.945 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:39.945 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:39.945 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.945 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.945 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:39.945 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.945 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.945 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.945 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.945 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:39.945 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.945 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.945 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.945 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.945 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.945 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.945 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.945 "name": "Existed_Raid", 00:10:39.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.945 "strip_size_kb": 64, 00:10:39.945 "state": "configuring", 00:10:39.945 "raid_level": "raid0", 00:10:39.945 "superblock": false, 00:10:39.945 "num_base_bdevs": 4, 00:10:39.945 "num_base_bdevs_discovered": 3, 00:10:39.945 "num_base_bdevs_operational": 4, 00:10:39.945 "base_bdevs_list": [ 00:10:39.945 { 00:10:39.945 "name": "BaseBdev1", 00:10:39.945 "uuid": "00356f3b-fee8-422d-9964-717818f58241", 00:10:39.945 "is_configured": true, 00:10:39.945 "data_offset": 0, 00:10:39.945 "data_size": 65536 00:10:39.945 }, 00:10:39.945 { 00:10:39.945 "name": "BaseBdev2", 00:10:39.945 "uuid": "d8474861-eb1e-4551-b4f9-29d5de68b50e", 00:10:39.945 "is_configured": true, 00:10:39.945 "data_offset": 0, 00:10:39.945 "data_size": 65536 00:10:39.945 }, 00:10:39.945 { 00:10:39.945 "name": "BaseBdev3", 00:10:39.945 "uuid": "0cc1f6a6-755a-4570-abf6-09eb4b03c801", 00:10:39.945 "is_configured": true, 00:10:39.945 "data_offset": 0, 00:10:39.945 "data_size": 65536 00:10:39.945 }, 00:10:39.945 { 00:10:39.945 "name": "BaseBdev4", 00:10:39.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.945 "is_configured": false, 
00:10:39.945 "data_offset": 0, 00:10:39.945 "data_size": 0 00:10:39.945 } 00:10:39.945 ] 00:10:39.945 }' 00:10:39.945 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.945 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.516 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:40.516 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.516 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.516 [2024-11-26 20:23:33.907069] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:40.516 [2024-11-26 20:23:33.907243] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:40.516 [2024-11-26 20:23:33.907295] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:40.516 [2024-11-26 20:23:33.907720] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:40.516 [2024-11-26 20:23:33.907986] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:40.516 [2024-11-26 20:23:33.908050] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:10:40.516 [2024-11-26 20:23:33.908390] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:40.516 BaseBdev4 00:10:40.516 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.516 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:40.516 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:40.516 20:23:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:40.516 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:40.516 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:40.516 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:40.516 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:40.516 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.516 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.516 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.516 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:40.516 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.516 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.516 [ 00:10:40.516 { 00:10:40.516 "name": "BaseBdev4", 00:10:40.516 "aliases": [ 00:10:40.516 "1b347d4f-6c52-4686-a75f-bef773a7e62b" 00:10:40.516 ], 00:10:40.516 "product_name": "Malloc disk", 00:10:40.516 "block_size": 512, 00:10:40.516 "num_blocks": 65536, 00:10:40.516 "uuid": "1b347d4f-6c52-4686-a75f-bef773a7e62b", 00:10:40.516 "assigned_rate_limits": { 00:10:40.516 "rw_ios_per_sec": 0, 00:10:40.516 "rw_mbytes_per_sec": 0, 00:10:40.516 "r_mbytes_per_sec": 0, 00:10:40.516 "w_mbytes_per_sec": 0 00:10:40.516 }, 00:10:40.516 "claimed": true, 00:10:40.516 "claim_type": "exclusive_write", 00:10:40.516 "zoned": false, 00:10:40.516 "supported_io_types": { 00:10:40.516 "read": true, 00:10:40.516 "write": true, 00:10:40.516 "unmap": true, 00:10:40.516 "flush": true, 00:10:40.516 "reset": true, 00:10:40.516 
"nvme_admin": false, 00:10:40.516 "nvme_io": false, 00:10:40.516 "nvme_io_md": false, 00:10:40.516 "write_zeroes": true, 00:10:40.516 "zcopy": true, 00:10:40.516 "get_zone_info": false, 00:10:40.516 "zone_management": false, 00:10:40.516 "zone_append": false, 00:10:40.516 "compare": false, 00:10:40.516 "compare_and_write": false, 00:10:40.516 "abort": true, 00:10:40.516 "seek_hole": false, 00:10:40.516 "seek_data": false, 00:10:40.516 "copy": true, 00:10:40.516 "nvme_iov_md": false 00:10:40.516 }, 00:10:40.516 "memory_domains": [ 00:10:40.516 { 00:10:40.516 "dma_device_id": "system", 00:10:40.516 "dma_device_type": 1 00:10:40.516 }, 00:10:40.516 { 00:10:40.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.516 "dma_device_type": 2 00:10:40.516 } 00:10:40.516 ], 00:10:40.516 "driver_specific": {} 00:10:40.516 } 00:10:40.516 ] 00:10:40.516 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.516 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:40.516 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:40.516 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:40.516 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:40.516 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.516 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:40.516 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:40.516 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.516 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:40.516 20:23:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.516 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.516 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.516 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.516 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.516 20:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.516 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.516 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.516 20:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.516 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.516 "name": "Existed_Raid", 00:10:40.516 "uuid": "a641167d-c6c4-4d93-bed3-26467f603c38", 00:10:40.516 "strip_size_kb": 64, 00:10:40.516 "state": "online", 00:10:40.516 "raid_level": "raid0", 00:10:40.516 "superblock": false, 00:10:40.516 "num_base_bdevs": 4, 00:10:40.516 "num_base_bdevs_discovered": 4, 00:10:40.516 "num_base_bdevs_operational": 4, 00:10:40.516 "base_bdevs_list": [ 00:10:40.516 { 00:10:40.516 "name": "BaseBdev1", 00:10:40.516 "uuid": "00356f3b-fee8-422d-9964-717818f58241", 00:10:40.516 "is_configured": true, 00:10:40.516 "data_offset": 0, 00:10:40.516 "data_size": 65536 00:10:40.516 }, 00:10:40.516 { 00:10:40.516 "name": "BaseBdev2", 00:10:40.516 "uuid": "d8474861-eb1e-4551-b4f9-29d5de68b50e", 00:10:40.516 "is_configured": true, 00:10:40.516 "data_offset": 0, 00:10:40.516 "data_size": 65536 00:10:40.516 }, 00:10:40.516 { 00:10:40.516 "name": "BaseBdev3", 00:10:40.516 "uuid": 
"0cc1f6a6-755a-4570-abf6-09eb4b03c801", 00:10:40.517 "is_configured": true, 00:10:40.517 "data_offset": 0, 00:10:40.517 "data_size": 65536 00:10:40.517 }, 00:10:40.517 { 00:10:40.517 "name": "BaseBdev4", 00:10:40.517 "uuid": "1b347d4f-6c52-4686-a75f-bef773a7e62b", 00:10:40.517 "is_configured": true, 00:10:40.517 "data_offset": 0, 00:10:40.517 "data_size": 65536 00:10:40.517 } 00:10:40.517 ] 00:10:40.517 }' 00:10:40.517 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.517 20:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.085 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:41.085 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:41.085 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:41.085 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:41.085 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:41.085 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:41.086 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:41.086 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:41.086 20:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.086 20:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.086 [2024-11-26 20:23:34.446738] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:41.086 20:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.086 20:23:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:41.086 "name": "Existed_Raid", 00:10:41.086 "aliases": [ 00:10:41.086 "a641167d-c6c4-4d93-bed3-26467f603c38" 00:10:41.086 ], 00:10:41.086 "product_name": "Raid Volume", 00:10:41.086 "block_size": 512, 00:10:41.086 "num_blocks": 262144, 00:10:41.086 "uuid": "a641167d-c6c4-4d93-bed3-26467f603c38", 00:10:41.086 "assigned_rate_limits": { 00:10:41.086 "rw_ios_per_sec": 0, 00:10:41.086 "rw_mbytes_per_sec": 0, 00:10:41.086 "r_mbytes_per_sec": 0, 00:10:41.086 "w_mbytes_per_sec": 0 00:10:41.086 }, 00:10:41.086 "claimed": false, 00:10:41.086 "zoned": false, 00:10:41.086 "supported_io_types": { 00:10:41.086 "read": true, 00:10:41.086 "write": true, 00:10:41.086 "unmap": true, 00:10:41.086 "flush": true, 00:10:41.086 "reset": true, 00:10:41.086 "nvme_admin": false, 00:10:41.086 "nvme_io": false, 00:10:41.086 "nvme_io_md": false, 00:10:41.086 "write_zeroes": true, 00:10:41.086 "zcopy": false, 00:10:41.086 "get_zone_info": false, 00:10:41.086 "zone_management": false, 00:10:41.086 "zone_append": false, 00:10:41.086 "compare": false, 00:10:41.086 "compare_and_write": false, 00:10:41.086 "abort": false, 00:10:41.086 "seek_hole": false, 00:10:41.086 "seek_data": false, 00:10:41.086 "copy": false, 00:10:41.086 "nvme_iov_md": false 00:10:41.086 }, 00:10:41.086 "memory_domains": [ 00:10:41.086 { 00:10:41.086 "dma_device_id": "system", 00:10:41.086 "dma_device_type": 1 00:10:41.086 }, 00:10:41.086 { 00:10:41.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.086 "dma_device_type": 2 00:10:41.086 }, 00:10:41.086 { 00:10:41.086 "dma_device_id": "system", 00:10:41.086 "dma_device_type": 1 00:10:41.086 }, 00:10:41.086 { 00:10:41.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.086 "dma_device_type": 2 00:10:41.086 }, 00:10:41.086 { 00:10:41.086 "dma_device_id": "system", 00:10:41.086 "dma_device_type": 1 00:10:41.086 }, 00:10:41.086 { 00:10:41.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:41.086 "dma_device_type": 2 00:10:41.086 }, 00:10:41.086 { 00:10:41.086 "dma_device_id": "system", 00:10:41.086 "dma_device_type": 1 00:10:41.086 }, 00:10:41.086 { 00:10:41.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.086 "dma_device_type": 2 00:10:41.086 } 00:10:41.086 ], 00:10:41.086 "driver_specific": { 00:10:41.086 "raid": { 00:10:41.086 "uuid": "a641167d-c6c4-4d93-bed3-26467f603c38", 00:10:41.086 "strip_size_kb": 64, 00:10:41.086 "state": "online", 00:10:41.086 "raid_level": "raid0", 00:10:41.086 "superblock": false, 00:10:41.086 "num_base_bdevs": 4, 00:10:41.086 "num_base_bdevs_discovered": 4, 00:10:41.086 "num_base_bdevs_operational": 4, 00:10:41.086 "base_bdevs_list": [ 00:10:41.086 { 00:10:41.086 "name": "BaseBdev1", 00:10:41.086 "uuid": "00356f3b-fee8-422d-9964-717818f58241", 00:10:41.086 "is_configured": true, 00:10:41.086 "data_offset": 0, 00:10:41.086 "data_size": 65536 00:10:41.086 }, 00:10:41.086 { 00:10:41.086 "name": "BaseBdev2", 00:10:41.086 "uuid": "d8474861-eb1e-4551-b4f9-29d5de68b50e", 00:10:41.086 "is_configured": true, 00:10:41.086 "data_offset": 0, 00:10:41.086 "data_size": 65536 00:10:41.086 }, 00:10:41.086 { 00:10:41.086 "name": "BaseBdev3", 00:10:41.086 "uuid": "0cc1f6a6-755a-4570-abf6-09eb4b03c801", 00:10:41.086 "is_configured": true, 00:10:41.086 "data_offset": 0, 00:10:41.086 "data_size": 65536 00:10:41.086 }, 00:10:41.086 { 00:10:41.086 "name": "BaseBdev4", 00:10:41.086 "uuid": "1b347d4f-6c52-4686-a75f-bef773a7e62b", 00:10:41.086 "is_configured": true, 00:10:41.086 "data_offset": 0, 00:10:41.086 "data_size": 65536 00:10:41.086 } 00:10:41.086 ] 00:10:41.086 } 00:10:41.086 } 00:10:41.086 }' 00:10:41.086 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:41.086 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:41.086 BaseBdev2 00:10:41.086 BaseBdev3 
00:10:41.086 BaseBdev4' 00:10:41.086 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.086 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:41.086 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.086 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.086 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:41.086 20:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.086 20:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.086 20:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.086 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.086 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.086 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.086 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:41.086 20:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.086 20:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.086 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.345 20:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.345 20:23:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.345 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.345 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.345 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.345 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:41.345 20:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.345 20:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.345 20:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.345 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.345 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.345 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.345 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:41.345 20:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.345 20:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.345 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.345 20:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.346 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.346 20:23:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.346 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:41.346 20:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.346 20:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.346 [2024-11-26 20:23:34.797872] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:41.346 [2024-11-26 20:23:34.797922] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:41.346 [2024-11-26 20:23:34.798001] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:41.346 20:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.346 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:41.346 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:41.346 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:41.346 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:41.346 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:41.346 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:41.346 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.346 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:41.346 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:41.346 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:41.346 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:41.346 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.346 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.346 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.346 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.346 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.346 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.346 20:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.346 20:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.346 20:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.346 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.346 "name": "Existed_Raid", 00:10:41.346 "uuid": "a641167d-c6c4-4d93-bed3-26467f603c38", 00:10:41.346 "strip_size_kb": 64, 00:10:41.346 "state": "offline", 00:10:41.346 "raid_level": "raid0", 00:10:41.346 "superblock": false, 00:10:41.346 "num_base_bdevs": 4, 00:10:41.346 "num_base_bdevs_discovered": 3, 00:10:41.346 "num_base_bdevs_operational": 3, 00:10:41.346 "base_bdevs_list": [ 00:10:41.346 { 00:10:41.346 "name": null, 00:10:41.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.346 "is_configured": false, 00:10:41.346 "data_offset": 0, 00:10:41.346 "data_size": 65536 00:10:41.346 }, 00:10:41.346 { 00:10:41.346 "name": "BaseBdev2", 00:10:41.346 "uuid": "d8474861-eb1e-4551-b4f9-29d5de68b50e", 00:10:41.346 "is_configured": 
true, 00:10:41.346 "data_offset": 0, 00:10:41.346 "data_size": 65536 00:10:41.346 }, 00:10:41.346 { 00:10:41.346 "name": "BaseBdev3", 00:10:41.346 "uuid": "0cc1f6a6-755a-4570-abf6-09eb4b03c801", 00:10:41.346 "is_configured": true, 00:10:41.346 "data_offset": 0, 00:10:41.346 "data_size": 65536 00:10:41.346 }, 00:10:41.346 { 00:10:41.346 "name": "BaseBdev4", 00:10:41.346 "uuid": "1b347d4f-6c52-4686-a75f-bef773a7e62b", 00:10:41.346 "is_configured": true, 00:10:41.346 "data_offset": 0, 00:10:41.346 "data_size": 65536 00:10:41.346 } 00:10:41.346 ] 00:10:41.346 }' 00:10:41.346 20:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.346 20:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.933 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:41.933 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:41.933 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:41.933 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.933 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.933 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.933 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.933 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:41.933 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:41.933 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:41.933 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:41.933 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.933 [2024-11-26 20:23:35.373851] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:41.933 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.933 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:41.933 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:41.933 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.933 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:41.933 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.933 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.933 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.933 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:41.933 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:41.934 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:41.934 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.934 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.934 [2024-11-26 20:23:35.442364] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:41.934 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.934 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:41.934 20:23:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:41.934 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:41.934 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.934 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.934 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.192 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.193 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:42.193 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:42.193 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:42.193 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.193 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.193 [2024-11-26 20:23:35.548879] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:42.193 [2024-11-26 20:23:35.548946] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:10:42.193 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.193 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:42.193 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:42.193 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.193 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:10:42.193 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.193 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.193 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.193 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:42.193 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:42.193 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:42.193 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:42.193 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:42.193 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:42.193 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.193 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.193 BaseBdev2 00:10:42.193 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.193 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:42.193 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:42.193 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:42.193 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:42.193 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:42.193 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:10:42.193 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:42.193 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.193 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.193 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.193 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:42.193 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.193 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.193 [ 00:10:42.193 { 00:10:42.193 "name": "BaseBdev2", 00:10:42.193 "aliases": [ 00:10:42.193 "1c94f8e8-fad1-4ec6-a4bd-bf75673dd040" 00:10:42.193 ], 00:10:42.193 "product_name": "Malloc disk", 00:10:42.193 "block_size": 512, 00:10:42.193 "num_blocks": 65536, 00:10:42.193 "uuid": "1c94f8e8-fad1-4ec6-a4bd-bf75673dd040", 00:10:42.193 "assigned_rate_limits": { 00:10:42.193 "rw_ios_per_sec": 0, 00:10:42.193 "rw_mbytes_per_sec": 0, 00:10:42.193 "r_mbytes_per_sec": 0, 00:10:42.193 "w_mbytes_per_sec": 0 00:10:42.193 }, 00:10:42.193 "claimed": false, 00:10:42.193 "zoned": false, 00:10:42.193 "supported_io_types": { 00:10:42.193 "read": true, 00:10:42.193 "write": true, 00:10:42.193 "unmap": true, 00:10:42.193 "flush": true, 00:10:42.193 "reset": true, 00:10:42.193 "nvme_admin": false, 00:10:42.193 "nvme_io": false, 00:10:42.193 "nvme_io_md": false, 00:10:42.193 "write_zeroes": true, 00:10:42.193 "zcopy": true, 00:10:42.193 "get_zone_info": false, 00:10:42.193 "zone_management": false, 00:10:42.193 "zone_append": false, 00:10:42.193 "compare": false, 00:10:42.193 "compare_and_write": false, 00:10:42.193 "abort": true, 00:10:42.193 "seek_hole": false, 00:10:42.193 
"seek_data": false, 00:10:42.193 "copy": true, 00:10:42.193 "nvme_iov_md": false 00:10:42.193 }, 00:10:42.193 "memory_domains": [ 00:10:42.193 { 00:10:42.193 "dma_device_id": "system", 00:10:42.193 "dma_device_type": 1 00:10:42.193 }, 00:10:42.193 { 00:10:42.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.193 "dma_device_type": 2 00:10:42.193 } 00:10:42.193 ], 00:10:42.193 "driver_specific": {} 00:10:42.193 } 00:10:42.193 ] 00:10:42.193 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.193 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:42.193 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:42.193 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:42.193 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:42.193 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.193 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.193 BaseBdev3 00:10:42.193 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.193 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:42.193 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:42.193 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:42.193 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:42.193 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:42.193 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:10:42.193 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:42.193 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.193 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.193 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.193 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:42.193 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.193 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.193 [ 00:10:42.193 { 00:10:42.193 "name": "BaseBdev3", 00:10:42.193 "aliases": [ 00:10:42.193 "40f47fb0-35e1-4fa1-aaa8-f53030833f94" 00:10:42.193 ], 00:10:42.193 "product_name": "Malloc disk", 00:10:42.193 "block_size": 512, 00:10:42.193 "num_blocks": 65536, 00:10:42.193 "uuid": "40f47fb0-35e1-4fa1-aaa8-f53030833f94", 00:10:42.193 "assigned_rate_limits": { 00:10:42.193 "rw_ios_per_sec": 0, 00:10:42.193 "rw_mbytes_per_sec": 0, 00:10:42.193 "r_mbytes_per_sec": 0, 00:10:42.193 "w_mbytes_per_sec": 0 00:10:42.194 }, 00:10:42.194 "claimed": false, 00:10:42.194 "zoned": false, 00:10:42.194 "supported_io_types": { 00:10:42.194 "read": true, 00:10:42.194 "write": true, 00:10:42.194 "unmap": true, 00:10:42.194 "flush": true, 00:10:42.194 "reset": true, 00:10:42.194 "nvme_admin": false, 00:10:42.194 "nvme_io": false, 00:10:42.194 "nvme_io_md": false, 00:10:42.194 "write_zeroes": true, 00:10:42.194 "zcopy": true, 00:10:42.194 "get_zone_info": false, 00:10:42.194 "zone_management": false, 00:10:42.194 "zone_append": false, 00:10:42.194 "compare": false, 00:10:42.194 "compare_and_write": false, 00:10:42.194 "abort": true, 00:10:42.194 "seek_hole": false, 00:10:42.194 "seek_data": false, 
00:10:42.194 "copy": true, 00:10:42.194 "nvme_iov_md": false 00:10:42.194 }, 00:10:42.194 "memory_domains": [ 00:10:42.194 { 00:10:42.194 "dma_device_id": "system", 00:10:42.194 "dma_device_type": 1 00:10:42.194 }, 00:10:42.194 { 00:10:42.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.194 "dma_device_type": 2 00:10:42.194 } 00:10:42.194 ], 00:10:42.194 "driver_specific": {} 00:10:42.194 } 00:10:42.194 ] 00:10:42.194 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.194 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:42.194 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:42.194 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:42.194 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:42.194 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.194 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.452 BaseBdev4 00:10:42.452 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.452 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:42.452 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:42.452 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:42.452 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:42.452 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:42.452 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:42.452 
20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:42.452 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.452 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.452 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.452 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:42.452 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.452 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.452 [ 00:10:42.452 { 00:10:42.452 "name": "BaseBdev4", 00:10:42.452 "aliases": [ 00:10:42.452 "8680267b-dc1c-45b2-982a-ca88b7498acd" 00:10:42.453 ], 00:10:42.453 "product_name": "Malloc disk", 00:10:42.453 "block_size": 512, 00:10:42.453 "num_blocks": 65536, 00:10:42.453 "uuid": "8680267b-dc1c-45b2-982a-ca88b7498acd", 00:10:42.453 "assigned_rate_limits": { 00:10:42.453 "rw_ios_per_sec": 0, 00:10:42.453 "rw_mbytes_per_sec": 0, 00:10:42.453 "r_mbytes_per_sec": 0, 00:10:42.453 "w_mbytes_per_sec": 0 00:10:42.453 }, 00:10:42.453 "claimed": false, 00:10:42.453 "zoned": false, 00:10:42.453 "supported_io_types": { 00:10:42.453 "read": true, 00:10:42.453 "write": true, 00:10:42.453 "unmap": true, 00:10:42.453 "flush": true, 00:10:42.453 "reset": true, 00:10:42.453 "nvme_admin": false, 00:10:42.453 "nvme_io": false, 00:10:42.453 "nvme_io_md": false, 00:10:42.453 "write_zeroes": true, 00:10:42.453 "zcopy": true, 00:10:42.453 "get_zone_info": false, 00:10:42.453 "zone_management": false, 00:10:42.453 "zone_append": false, 00:10:42.453 "compare": false, 00:10:42.453 "compare_and_write": false, 00:10:42.453 "abort": true, 00:10:42.453 "seek_hole": false, 00:10:42.453 "seek_data": false, 00:10:42.453 
"copy": true, 00:10:42.453 "nvme_iov_md": false 00:10:42.453 }, 00:10:42.453 "memory_domains": [ 00:10:42.453 { 00:10:42.453 "dma_device_id": "system", 00:10:42.453 "dma_device_type": 1 00:10:42.453 }, 00:10:42.453 { 00:10:42.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.453 "dma_device_type": 2 00:10:42.453 } 00:10:42.453 ], 00:10:42.453 "driver_specific": {} 00:10:42.453 } 00:10:42.453 ] 00:10:42.453 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.453 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:42.453 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:42.453 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:42.453 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:42.453 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.453 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.453 [2024-11-26 20:23:35.799535] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:42.453 [2024-11-26 20:23:35.799631] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:42.453 [2024-11-26 20:23:35.799669] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:42.453 [2024-11-26 20:23:35.802025] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:42.453 [2024-11-26 20:23:35.802107] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:42.453 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.453 20:23:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:42.453 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.453 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.453 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:42.453 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.453 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.453 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.453 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.453 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.453 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.453 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.453 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.453 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.453 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.453 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.453 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.453 "name": "Existed_Raid", 00:10:42.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.453 "strip_size_kb": 64, 00:10:42.453 "state": "configuring", 00:10:42.453 
"raid_level": "raid0", 00:10:42.453 "superblock": false, 00:10:42.453 "num_base_bdevs": 4, 00:10:42.453 "num_base_bdevs_discovered": 3, 00:10:42.453 "num_base_bdevs_operational": 4, 00:10:42.453 "base_bdevs_list": [ 00:10:42.453 { 00:10:42.453 "name": "BaseBdev1", 00:10:42.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.453 "is_configured": false, 00:10:42.453 "data_offset": 0, 00:10:42.453 "data_size": 0 00:10:42.453 }, 00:10:42.453 { 00:10:42.453 "name": "BaseBdev2", 00:10:42.453 "uuid": "1c94f8e8-fad1-4ec6-a4bd-bf75673dd040", 00:10:42.453 "is_configured": true, 00:10:42.453 "data_offset": 0, 00:10:42.453 "data_size": 65536 00:10:42.453 }, 00:10:42.453 { 00:10:42.453 "name": "BaseBdev3", 00:10:42.453 "uuid": "40f47fb0-35e1-4fa1-aaa8-f53030833f94", 00:10:42.453 "is_configured": true, 00:10:42.453 "data_offset": 0, 00:10:42.453 "data_size": 65536 00:10:42.453 }, 00:10:42.453 { 00:10:42.453 "name": "BaseBdev4", 00:10:42.453 "uuid": "8680267b-dc1c-45b2-982a-ca88b7498acd", 00:10:42.453 "is_configured": true, 00:10:42.453 "data_offset": 0, 00:10:42.453 "data_size": 65536 00:10:42.453 } 00:10:42.453 ] 00:10:42.453 }' 00:10:42.453 20:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.453 20:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.712 20:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:42.712 20:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.712 20:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.712 [2024-11-26 20:23:36.234816] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:42.712 20:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.712 20:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:42.712 20:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.712 20:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.712 20:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:42.712 20:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.712 20:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.712 20:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.712 20:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.712 20:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.712 20:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.712 20:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.712 20:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.712 20:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.712 20:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.712 20:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.970 20:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.970 "name": "Existed_Raid", 00:10:42.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.970 "strip_size_kb": 64, 00:10:42.970 "state": "configuring", 00:10:42.970 "raid_level": "raid0", 00:10:42.970 "superblock": false, 00:10:42.970 
"num_base_bdevs": 4, 00:10:42.970 "num_base_bdevs_discovered": 2, 00:10:42.970 "num_base_bdevs_operational": 4, 00:10:42.970 "base_bdevs_list": [ 00:10:42.970 { 00:10:42.970 "name": "BaseBdev1", 00:10:42.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.970 "is_configured": false, 00:10:42.970 "data_offset": 0, 00:10:42.970 "data_size": 0 00:10:42.970 }, 00:10:42.970 { 00:10:42.970 "name": null, 00:10:42.970 "uuid": "1c94f8e8-fad1-4ec6-a4bd-bf75673dd040", 00:10:42.970 "is_configured": false, 00:10:42.970 "data_offset": 0, 00:10:42.970 "data_size": 65536 00:10:42.970 }, 00:10:42.970 { 00:10:42.970 "name": "BaseBdev3", 00:10:42.970 "uuid": "40f47fb0-35e1-4fa1-aaa8-f53030833f94", 00:10:42.970 "is_configured": true, 00:10:42.970 "data_offset": 0, 00:10:42.970 "data_size": 65536 00:10:42.970 }, 00:10:42.970 { 00:10:42.970 "name": "BaseBdev4", 00:10:42.970 "uuid": "8680267b-dc1c-45b2-982a-ca88b7498acd", 00:10:42.970 "is_configured": true, 00:10:42.970 "data_offset": 0, 00:10:42.970 "data_size": 65536 00:10:42.970 } 00:10:42.970 ] 00:10:42.970 }' 00:10:42.970 20:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.970 20:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.228 20:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:43.228 20:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.228 20:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.228 20:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.228 20:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.228 20:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:43.228 20:23:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:43.228 20:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.228 20:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.487 [2024-11-26 20:23:36.787076] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:43.487 BaseBdev1 00:10:43.487 20:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.487 20:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:43.487 20:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:43.487 20:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:43.487 20:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:43.487 20:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:43.487 20:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:43.487 20:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:43.487 20:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.487 20:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.487 20:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.487 20:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:43.487 20:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.487 20:23:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:43.487 [ 00:10:43.487 { 00:10:43.487 "name": "BaseBdev1", 00:10:43.487 "aliases": [ 00:10:43.487 "72eb8513-ea6b-4aa5-b97c-91a5f4236ad2" 00:10:43.487 ], 00:10:43.487 "product_name": "Malloc disk", 00:10:43.487 "block_size": 512, 00:10:43.487 "num_blocks": 65536, 00:10:43.487 "uuid": "72eb8513-ea6b-4aa5-b97c-91a5f4236ad2", 00:10:43.487 "assigned_rate_limits": { 00:10:43.487 "rw_ios_per_sec": 0, 00:10:43.487 "rw_mbytes_per_sec": 0, 00:10:43.487 "r_mbytes_per_sec": 0, 00:10:43.487 "w_mbytes_per_sec": 0 00:10:43.487 }, 00:10:43.487 "claimed": true, 00:10:43.487 "claim_type": "exclusive_write", 00:10:43.487 "zoned": false, 00:10:43.487 "supported_io_types": { 00:10:43.487 "read": true, 00:10:43.487 "write": true, 00:10:43.487 "unmap": true, 00:10:43.487 "flush": true, 00:10:43.487 "reset": true, 00:10:43.487 "nvme_admin": false, 00:10:43.487 "nvme_io": false, 00:10:43.487 "nvme_io_md": false, 00:10:43.487 "write_zeroes": true, 00:10:43.487 "zcopy": true, 00:10:43.487 "get_zone_info": false, 00:10:43.487 "zone_management": false, 00:10:43.487 "zone_append": false, 00:10:43.487 "compare": false, 00:10:43.487 "compare_and_write": false, 00:10:43.487 "abort": true, 00:10:43.487 "seek_hole": false, 00:10:43.487 "seek_data": false, 00:10:43.487 "copy": true, 00:10:43.487 "nvme_iov_md": false 00:10:43.487 }, 00:10:43.487 "memory_domains": [ 00:10:43.487 { 00:10:43.487 "dma_device_id": "system", 00:10:43.487 "dma_device_type": 1 00:10:43.487 }, 00:10:43.487 { 00:10:43.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.487 "dma_device_type": 2 00:10:43.488 } 00:10:43.488 ], 00:10:43.488 "driver_specific": {} 00:10:43.488 } 00:10:43.488 ] 00:10:43.488 20:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.488 20:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:43.488 20:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:43.488 20:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.488 20:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.488 20:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:43.488 20:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.488 20:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.488 20:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.488 20:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.488 20:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.488 20:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.488 20:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.488 20:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.488 20:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.488 20:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.488 20:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.488 20:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.488 "name": "Existed_Raid", 00:10:43.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.488 "strip_size_kb": 64, 00:10:43.488 "state": "configuring", 00:10:43.488 "raid_level": "raid0", 00:10:43.488 "superblock": false, 
00:10:43.488 "num_base_bdevs": 4, 00:10:43.488 "num_base_bdevs_discovered": 3, 00:10:43.488 "num_base_bdevs_operational": 4, 00:10:43.488 "base_bdevs_list": [ 00:10:43.488 { 00:10:43.488 "name": "BaseBdev1", 00:10:43.488 "uuid": "72eb8513-ea6b-4aa5-b97c-91a5f4236ad2", 00:10:43.488 "is_configured": true, 00:10:43.488 "data_offset": 0, 00:10:43.488 "data_size": 65536 00:10:43.488 }, 00:10:43.488 { 00:10:43.488 "name": null, 00:10:43.488 "uuid": "1c94f8e8-fad1-4ec6-a4bd-bf75673dd040", 00:10:43.488 "is_configured": false, 00:10:43.488 "data_offset": 0, 00:10:43.488 "data_size": 65536 00:10:43.488 }, 00:10:43.488 { 00:10:43.488 "name": "BaseBdev3", 00:10:43.488 "uuid": "40f47fb0-35e1-4fa1-aaa8-f53030833f94", 00:10:43.488 "is_configured": true, 00:10:43.488 "data_offset": 0, 00:10:43.488 "data_size": 65536 00:10:43.488 }, 00:10:43.488 { 00:10:43.488 "name": "BaseBdev4", 00:10:43.488 "uuid": "8680267b-dc1c-45b2-982a-ca88b7498acd", 00:10:43.488 "is_configured": true, 00:10:43.488 "data_offset": 0, 00:10:43.488 "data_size": 65536 00:10:43.488 } 00:10:43.488 ] 00:10:43.488 }' 00:10:43.488 20:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.488 20:23:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.747 20:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.747 20:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:43.747 20:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.747 20:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.747 20:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.007 20:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:44.007 20:23:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:44.007 20:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.007 20:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.007 [2024-11-26 20:23:37.322303] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:44.007 20:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.007 20:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:44.007 20:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.007 20:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.007 20:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:44.007 20:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.007 20:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.007 20:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.007 20:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.007 20:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.007 20:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.007 20:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.007 20:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.007 20:23:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.007 20:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.007 20:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.007 20:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.007 "name": "Existed_Raid", 00:10:44.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.007 "strip_size_kb": 64, 00:10:44.007 "state": "configuring", 00:10:44.007 "raid_level": "raid0", 00:10:44.007 "superblock": false, 00:10:44.007 "num_base_bdevs": 4, 00:10:44.007 "num_base_bdevs_discovered": 2, 00:10:44.007 "num_base_bdevs_operational": 4, 00:10:44.007 "base_bdevs_list": [ 00:10:44.007 { 00:10:44.007 "name": "BaseBdev1", 00:10:44.007 "uuid": "72eb8513-ea6b-4aa5-b97c-91a5f4236ad2", 00:10:44.007 "is_configured": true, 00:10:44.007 "data_offset": 0, 00:10:44.007 "data_size": 65536 00:10:44.007 }, 00:10:44.007 { 00:10:44.007 "name": null, 00:10:44.007 "uuid": "1c94f8e8-fad1-4ec6-a4bd-bf75673dd040", 00:10:44.007 "is_configured": false, 00:10:44.007 "data_offset": 0, 00:10:44.007 "data_size": 65536 00:10:44.007 }, 00:10:44.007 { 00:10:44.007 "name": null, 00:10:44.007 "uuid": "40f47fb0-35e1-4fa1-aaa8-f53030833f94", 00:10:44.007 "is_configured": false, 00:10:44.007 "data_offset": 0, 00:10:44.007 "data_size": 65536 00:10:44.007 }, 00:10:44.007 { 00:10:44.007 "name": "BaseBdev4", 00:10:44.007 "uuid": "8680267b-dc1c-45b2-982a-ca88b7498acd", 00:10:44.007 "is_configured": true, 00:10:44.007 "data_offset": 0, 00:10:44.007 "data_size": 65536 00:10:44.007 } 00:10:44.007 ] 00:10:44.007 }' 00:10:44.007 20:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.007 20:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.266 20:23:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.266 20:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.266 20:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.266 20:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:44.266 20:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.266 20:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:44.266 20:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:44.266 20:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.266 20:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.266 [2024-11-26 20:23:37.797628] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:44.266 20:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.266 20:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:44.266 20:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.266 20:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.266 20:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:44.266 20:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.266 20:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.266 20:23:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.266 20:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.266 20:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.266 20:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.266 20:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.266 20:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.266 20:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.266 20:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.524 20:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.524 20:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.524 "name": "Existed_Raid", 00:10:44.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.524 "strip_size_kb": 64, 00:10:44.524 "state": "configuring", 00:10:44.524 "raid_level": "raid0", 00:10:44.524 "superblock": false, 00:10:44.524 "num_base_bdevs": 4, 00:10:44.524 "num_base_bdevs_discovered": 3, 00:10:44.524 "num_base_bdevs_operational": 4, 00:10:44.524 "base_bdevs_list": [ 00:10:44.524 { 00:10:44.525 "name": "BaseBdev1", 00:10:44.525 "uuid": "72eb8513-ea6b-4aa5-b97c-91a5f4236ad2", 00:10:44.525 "is_configured": true, 00:10:44.525 "data_offset": 0, 00:10:44.525 "data_size": 65536 00:10:44.525 }, 00:10:44.525 { 00:10:44.525 "name": null, 00:10:44.525 "uuid": "1c94f8e8-fad1-4ec6-a4bd-bf75673dd040", 00:10:44.525 "is_configured": false, 00:10:44.525 "data_offset": 0, 00:10:44.525 "data_size": 65536 00:10:44.525 }, 00:10:44.525 { 00:10:44.525 "name": "BaseBdev3", 00:10:44.525 "uuid": "40f47fb0-35e1-4fa1-aaa8-f53030833f94", 
00:10:44.525 "is_configured": true, 00:10:44.525 "data_offset": 0, 00:10:44.525 "data_size": 65536 00:10:44.525 }, 00:10:44.525 { 00:10:44.525 "name": "BaseBdev4", 00:10:44.525 "uuid": "8680267b-dc1c-45b2-982a-ca88b7498acd", 00:10:44.525 "is_configured": true, 00:10:44.525 "data_offset": 0, 00:10:44.525 "data_size": 65536 00:10:44.525 } 00:10:44.525 ] 00:10:44.525 }' 00:10:44.525 20:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.525 20:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.784 20:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.784 20:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:44.784 20:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.784 20:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.784 20:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.784 20:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:44.784 20:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:44.784 20:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.784 20:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.784 [2024-11-26 20:23:38.240894] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:44.784 20:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.784 20:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:44.784 20:23:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.784 20:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.784 20:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:44.784 20:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.784 20:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.784 20:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.784 20:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.784 20:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.784 20:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.784 20:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.784 20:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.784 20:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.784 20:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.784 20:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.784 20:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.784 "name": "Existed_Raid", 00:10:44.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.784 "strip_size_kb": 64, 00:10:44.784 "state": "configuring", 00:10:44.784 "raid_level": "raid0", 00:10:44.784 "superblock": false, 00:10:44.784 "num_base_bdevs": 4, 00:10:44.784 "num_base_bdevs_discovered": 2, 00:10:44.784 
"num_base_bdevs_operational": 4, 00:10:44.784 "base_bdevs_list": [ 00:10:44.784 { 00:10:44.784 "name": null, 00:10:44.784 "uuid": "72eb8513-ea6b-4aa5-b97c-91a5f4236ad2", 00:10:44.784 "is_configured": false, 00:10:44.784 "data_offset": 0, 00:10:44.784 "data_size": 65536 00:10:44.784 }, 00:10:44.784 { 00:10:44.784 "name": null, 00:10:44.784 "uuid": "1c94f8e8-fad1-4ec6-a4bd-bf75673dd040", 00:10:44.784 "is_configured": false, 00:10:44.784 "data_offset": 0, 00:10:44.784 "data_size": 65536 00:10:44.784 }, 00:10:44.784 { 00:10:44.784 "name": "BaseBdev3", 00:10:44.784 "uuid": "40f47fb0-35e1-4fa1-aaa8-f53030833f94", 00:10:44.784 "is_configured": true, 00:10:44.784 "data_offset": 0, 00:10:44.784 "data_size": 65536 00:10:44.784 }, 00:10:44.784 { 00:10:44.784 "name": "BaseBdev4", 00:10:44.784 "uuid": "8680267b-dc1c-45b2-982a-ca88b7498acd", 00:10:44.784 "is_configured": true, 00:10:44.784 "data_offset": 0, 00:10:44.784 "data_size": 65536 00:10:44.784 } 00:10:44.784 ] 00:10:44.784 }' 00:10:44.784 20:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.784 20:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.353 20:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:45.353 20:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.353 20:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.353 20:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.353 20:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.353 20:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:45.353 20:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:10:45.353 20:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.353 20:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.353 [2024-11-26 20:23:38.743643] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:45.353 20:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.353 20:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:45.353 20:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.353 20:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.353 20:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:45.353 20:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.353 20:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.353 20:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.353 20:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.353 20:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.353 20:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.353 20:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.353 20:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.353 20:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.353 
20:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.353 20:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.353 20:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.353 "name": "Existed_Raid", 00:10:45.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.353 "strip_size_kb": 64, 00:10:45.353 "state": "configuring", 00:10:45.353 "raid_level": "raid0", 00:10:45.353 "superblock": false, 00:10:45.353 "num_base_bdevs": 4, 00:10:45.353 "num_base_bdevs_discovered": 3, 00:10:45.353 "num_base_bdevs_operational": 4, 00:10:45.353 "base_bdevs_list": [ 00:10:45.353 { 00:10:45.353 "name": null, 00:10:45.353 "uuid": "72eb8513-ea6b-4aa5-b97c-91a5f4236ad2", 00:10:45.353 "is_configured": false, 00:10:45.353 "data_offset": 0, 00:10:45.353 "data_size": 65536 00:10:45.353 }, 00:10:45.353 { 00:10:45.353 "name": "BaseBdev2", 00:10:45.353 "uuid": "1c94f8e8-fad1-4ec6-a4bd-bf75673dd040", 00:10:45.353 "is_configured": true, 00:10:45.353 "data_offset": 0, 00:10:45.353 "data_size": 65536 00:10:45.353 }, 00:10:45.353 { 00:10:45.353 "name": "BaseBdev3", 00:10:45.353 "uuid": "40f47fb0-35e1-4fa1-aaa8-f53030833f94", 00:10:45.353 "is_configured": true, 00:10:45.353 "data_offset": 0, 00:10:45.353 "data_size": 65536 00:10:45.353 }, 00:10:45.353 { 00:10:45.353 "name": "BaseBdev4", 00:10:45.353 "uuid": "8680267b-dc1c-45b2-982a-ca88b7498acd", 00:10:45.353 "is_configured": true, 00:10:45.353 "data_offset": 0, 00:10:45.353 "data_size": 65536 00:10:45.353 } 00:10:45.353 ] 00:10:45.353 }' 00:10:45.353 20:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.353 20:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.920 20:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:45.920 20:23:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.920 20:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.920 20:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.920 20:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.920 20:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:45.920 20:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:45.920 20:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.920 20:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.920 20:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.920 20:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.920 20:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 72eb8513-ea6b-4aa5-b97c-91a5f4236ad2 00:10:45.920 20:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.920 20:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.920 [2024-11-26 20:23:39.323748] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:45.920 [2024-11-26 20:23:39.323815] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:45.920 [2024-11-26 20:23:39.323826] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:45.920 [2024-11-26 20:23:39.324155] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:45.921 
[2024-11-26 20:23:39.324325] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:45.921 [2024-11-26 20:23:39.324352] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:10:45.921 [2024-11-26 20:23:39.324601] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:45.921 NewBaseBdev 00:10:45.921 20:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.921 20:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:45.921 20:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:45.921 20:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:45.921 20:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:45.921 20:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:45.921 20:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:45.921 20:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:45.921 20:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.921 20:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.921 20:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.921 20:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:45.921 20:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.921 20:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:45.921 [ 00:10:45.921 { 00:10:45.921 "name": "NewBaseBdev", 00:10:45.921 "aliases": [ 00:10:45.921 "72eb8513-ea6b-4aa5-b97c-91a5f4236ad2" 00:10:45.921 ], 00:10:45.921 "product_name": "Malloc disk", 00:10:45.921 "block_size": 512, 00:10:45.921 "num_blocks": 65536, 00:10:45.921 "uuid": "72eb8513-ea6b-4aa5-b97c-91a5f4236ad2", 00:10:45.921 "assigned_rate_limits": { 00:10:45.921 "rw_ios_per_sec": 0, 00:10:45.921 "rw_mbytes_per_sec": 0, 00:10:45.921 "r_mbytes_per_sec": 0, 00:10:45.921 "w_mbytes_per_sec": 0 00:10:45.921 }, 00:10:45.921 "claimed": true, 00:10:45.921 "claim_type": "exclusive_write", 00:10:45.921 "zoned": false, 00:10:45.921 "supported_io_types": { 00:10:45.921 "read": true, 00:10:45.921 "write": true, 00:10:45.921 "unmap": true, 00:10:45.921 "flush": true, 00:10:45.921 "reset": true, 00:10:45.921 "nvme_admin": false, 00:10:45.921 "nvme_io": false, 00:10:45.921 "nvme_io_md": false, 00:10:45.921 "write_zeroes": true, 00:10:45.921 "zcopy": true, 00:10:45.921 "get_zone_info": false, 00:10:45.921 "zone_management": false, 00:10:45.921 "zone_append": false, 00:10:45.921 "compare": false, 00:10:45.921 "compare_and_write": false, 00:10:45.921 "abort": true, 00:10:45.921 "seek_hole": false, 00:10:45.921 "seek_data": false, 00:10:45.921 "copy": true, 00:10:45.921 "nvme_iov_md": false 00:10:45.921 }, 00:10:45.921 "memory_domains": [ 00:10:45.921 { 00:10:45.921 "dma_device_id": "system", 00:10:45.921 "dma_device_type": 1 00:10:45.921 }, 00:10:45.921 { 00:10:45.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.921 "dma_device_type": 2 00:10:45.921 } 00:10:45.921 ], 00:10:45.921 "driver_specific": {} 00:10:45.921 } 00:10:45.921 ] 00:10:45.921 20:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.921 20:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:45.921 20:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid 
online raid0 64 4 00:10:45.921 20:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.921 20:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:45.921 20:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:45.921 20:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.921 20:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.921 20:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.921 20:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.921 20:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.921 20:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.921 20:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.921 20:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.921 20:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.921 20:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.921 20:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.921 20:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.921 "name": "Existed_Raid", 00:10:45.921 "uuid": "368d3097-01f6-4e53-8359-5127d25b9da0", 00:10:45.921 "strip_size_kb": 64, 00:10:45.921 "state": "online", 00:10:45.921 "raid_level": "raid0", 00:10:45.921 "superblock": false, 00:10:45.921 "num_base_bdevs": 4, 00:10:45.921 
"num_base_bdevs_discovered": 4, 00:10:45.921 "num_base_bdevs_operational": 4, 00:10:45.921 "base_bdevs_list": [ 00:10:45.921 { 00:10:45.921 "name": "NewBaseBdev", 00:10:45.921 "uuid": "72eb8513-ea6b-4aa5-b97c-91a5f4236ad2", 00:10:45.921 "is_configured": true, 00:10:45.921 "data_offset": 0, 00:10:45.921 "data_size": 65536 00:10:45.921 }, 00:10:45.921 { 00:10:45.921 "name": "BaseBdev2", 00:10:45.921 "uuid": "1c94f8e8-fad1-4ec6-a4bd-bf75673dd040", 00:10:45.921 "is_configured": true, 00:10:45.921 "data_offset": 0, 00:10:45.921 "data_size": 65536 00:10:45.921 }, 00:10:45.921 { 00:10:45.921 "name": "BaseBdev3", 00:10:45.921 "uuid": "40f47fb0-35e1-4fa1-aaa8-f53030833f94", 00:10:45.921 "is_configured": true, 00:10:45.921 "data_offset": 0, 00:10:45.921 "data_size": 65536 00:10:45.921 }, 00:10:45.921 { 00:10:45.921 "name": "BaseBdev4", 00:10:45.921 "uuid": "8680267b-dc1c-45b2-982a-ca88b7498acd", 00:10:45.921 "is_configured": true, 00:10:45.921 "data_offset": 0, 00:10:45.921 "data_size": 65536 00:10:45.921 } 00:10:45.921 ] 00:10:45.921 }' 00:10:45.921 20:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.921 20:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.489 20:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:46.489 20:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:46.489 20:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:46.489 20:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:46.489 20:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:46.489 20:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:46.489 20:23:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:46.489 20:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:46.489 20:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.489 20:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.489 [2024-11-26 20:23:39.867379] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:46.489 20:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.489 20:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:46.489 "name": "Existed_Raid", 00:10:46.489 "aliases": [ 00:10:46.489 "368d3097-01f6-4e53-8359-5127d25b9da0" 00:10:46.489 ], 00:10:46.489 "product_name": "Raid Volume", 00:10:46.489 "block_size": 512, 00:10:46.489 "num_blocks": 262144, 00:10:46.489 "uuid": "368d3097-01f6-4e53-8359-5127d25b9da0", 00:10:46.489 "assigned_rate_limits": { 00:10:46.489 "rw_ios_per_sec": 0, 00:10:46.489 "rw_mbytes_per_sec": 0, 00:10:46.489 "r_mbytes_per_sec": 0, 00:10:46.489 "w_mbytes_per_sec": 0 00:10:46.489 }, 00:10:46.489 "claimed": false, 00:10:46.489 "zoned": false, 00:10:46.489 "supported_io_types": { 00:10:46.489 "read": true, 00:10:46.489 "write": true, 00:10:46.489 "unmap": true, 00:10:46.489 "flush": true, 00:10:46.489 "reset": true, 00:10:46.489 "nvme_admin": false, 00:10:46.489 "nvme_io": false, 00:10:46.489 "nvme_io_md": false, 00:10:46.489 "write_zeroes": true, 00:10:46.489 "zcopy": false, 00:10:46.489 "get_zone_info": false, 00:10:46.489 "zone_management": false, 00:10:46.489 "zone_append": false, 00:10:46.489 "compare": false, 00:10:46.489 "compare_and_write": false, 00:10:46.489 "abort": false, 00:10:46.489 "seek_hole": false, 00:10:46.489 "seek_data": false, 00:10:46.489 "copy": false, 00:10:46.489 "nvme_iov_md": false 00:10:46.489 }, 00:10:46.489 "memory_domains": [ 
00:10:46.489 { 00:10:46.489 "dma_device_id": "system", 00:10:46.489 "dma_device_type": 1 00:10:46.489 }, 00:10:46.489 { 00:10:46.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.489 "dma_device_type": 2 00:10:46.489 }, 00:10:46.489 { 00:10:46.489 "dma_device_id": "system", 00:10:46.489 "dma_device_type": 1 00:10:46.489 }, 00:10:46.489 { 00:10:46.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.489 "dma_device_type": 2 00:10:46.489 }, 00:10:46.489 { 00:10:46.489 "dma_device_id": "system", 00:10:46.489 "dma_device_type": 1 00:10:46.489 }, 00:10:46.489 { 00:10:46.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.489 "dma_device_type": 2 00:10:46.489 }, 00:10:46.489 { 00:10:46.489 "dma_device_id": "system", 00:10:46.489 "dma_device_type": 1 00:10:46.489 }, 00:10:46.489 { 00:10:46.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.489 "dma_device_type": 2 00:10:46.489 } 00:10:46.489 ], 00:10:46.489 "driver_specific": { 00:10:46.489 "raid": { 00:10:46.489 "uuid": "368d3097-01f6-4e53-8359-5127d25b9da0", 00:10:46.489 "strip_size_kb": 64, 00:10:46.489 "state": "online", 00:10:46.489 "raid_level": "raid0", 00:10:46.489 "superblock": false, 00:10:46.489 "num_base_bdevs": 4, 00:10:46.489 "num_base_bdevs_discovered": 4, 00:10:46.489 "num_base_bdevs_operational": 4, 00:10:46.489 "base_bdevs_list": [ 00:10:46.489 { 00:10:46.489 "name": "NewBaseBdev", 00:10:46.490 "uuid": "72eb8513-ea6b-4aa5-b97c-91a5f4236ad2", 00:10:46.490 "is_configured": true, 00:10:46.490 "data_offset": 0, 00:10:46.490 "data_size": 65536 00:10:46.490 }, 00:10:46.490 { 00:10:46.490 "name": "BaseBdev2", 00:10:46.490 "uuid": "1c94f8e8-fad1-4ec6-a4bd-bf75673dd040", 00:10:46.490 "is_configured": true, 00:10:46.490 "data_offset": 0, 00:10:46.490 "data_size": 65536 00:10:46.490 }, 00:10:46.490 { 00:10:46.490 "name": "BaseBdev3", 00:10:46.490 "uuid": "40f47fb0-35e1-4fa1-aaa8-f53030833f94", 00:10:46.490 "is_configured": true, 00:10:46.490 "data_offset": 0, 00:10:46.490 "data_size": 65536 
00:10:46.490 },
00:10:46.490 {
00:10:46.490 "name": "BaseBdev4",
00:10:46.490 "uuid": "8680267b-dc1c-45b2-982a-ca88b7498acd",
00:10:46.490 "is_configured": true,
00:10:46.490 "data_offset": 0,
00:10:46.490 "data_size": 65536
00:10:46.490 }
00:10:46.490 ]
00:10:46.490 }
00:10:46.490 }
00:10:46.490 }'
00:10:46.490 20:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:10:46.490 20:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:10:46.490 BaseBdev2
00:10:46.490 BaseBdev3
00:10:46.490 BaseBdev4'
00:10:46.490 20:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:46.490 20:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:10:46.490 20:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:46.490 20:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:10:46.490 20:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:46.490 20:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.490 20:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:46.490 20:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:46.749 20:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:46.749 20:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:46.749 20:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:46.749 20:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:46.749 20:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:10:46.749 20:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:46.749 20:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.749 20:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:46.749 20:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:46.749 20:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:46.749 20:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:46.749 20:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:10:46.749 20:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:46.749 20:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:46.749 20:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.749 20:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:46.749 20:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:46.749 20:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:46.749 20:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:46.749 20:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4
00:10:46.749 20:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:46.749 20:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:46.749 20:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.749 20:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:46.749 20:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:46.749 20:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:46.749 20:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:10:46.749 20:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:46.749 20:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.749 [2024-11-26 20:23:40.194446] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:46.749 [2024-11-26 20:23:40.194492] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:10:46.749 [2024-11-26 20:23:40.194600] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:46.749 [2024-11-26 20:23:40.194704] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:46.749 [2024-11-26 20:23:40.194742] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline
00:10:46.749 20:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:46.749 20:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80812
00:10:46.749 20:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 80812 ']'
00:10:46.749 20:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 80812
00:10:46.749 20:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname
00:10:46.749 20:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:10:46.749 20:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80812
00:10:46.749 killing process with pid 80812
20:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:10:46.749 20:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:10:46.749 20:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80812'
00:10:46.749 20:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 80812
00:10:46.749 [2024-11-26 20:23:40.245081] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:10:46.749 20:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 80812
00:10:47.008 [2024-11-26 20:23:40.309949] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:10:47.268 20:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0
00:10:47.268
00:10:47.268 real 0m10.373s
00:10:47.268 user 0m17.461s
00:10:47.268 sys 0m2.213s
00:10:47.268 20:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:47.268 20:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:47.268 ************************************
00:10:47.268 END TEST raid_state_function_test
00:10:47.268 ************************************
00:10:47.268 20:23:40 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true
00:10:47.268 20:23:40 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:10:47.268 20:23:40 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:47.268 20:23:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:10:47.268 ************************************
00:10:47.268 START TEST raid_state_function_test_sb
00:10:47.268 ************************************
00:10:47.268 20:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 true
00:10:47.268 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0
00:10:47.268 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4
00:10:47.268 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:10:47.268 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:10:47.268 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:10:47.268 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:47.268 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:10:47.268 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:10:47.268 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:47.268 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:10:47.268 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:10:47.268 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:47.268 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:10:47.268 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:10:47.268 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:47.268 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4
00:10:47.268 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:10:47.268 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:47.268 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:10:47.268 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:10:47.268 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:10:47.268 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size
00:10:47.268 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:10:47.268 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:10:47.268 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']'
00:10:47.268 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:10:47.268 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:10:47.268 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:10:47.268 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
00:10:47.268 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=81467
00:10:47.268 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:10:47.268 20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81467'
Process raid pid: 81467
20:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 81467
00:10:47.268 20:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 81467 ']'
00:10:47.268 20:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:47.268 20:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
20:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:47.268 20:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable
00:10:47.268 20:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:47.526 [2024-11-26 20:23:40.861240] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:10:47.526 [2024-11-26 20:23:40.861914] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:47.526 [2024-11-26 20:23:41.015728] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:47.785 [2024-11-26 20:23:41.101846] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:10:47.785 [2024-11-26 20:23:41.183221] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:47.785 [2024-11-26 20:23:41.183278] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:48.351 20:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:10:48.351 20:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0
00:10:48.351 20:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:10:48.351 20:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:48.351 20:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:48.351 [2024-11-26 20:23:41.829615] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:48.351 [2024-11-26 20:23:41.829697] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:48.351 [2024-11-26 20:23:41.829715] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:48.351 [2024-11-26 20:23:41.829730] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:48.351 [2024-11-26 20:23:41.829741] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:10:48.351 [2024-11-26 20:23:41.829758] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:10:48.351 [2024-11-26 20:23:41.829769] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:10:48.351 [2024-11-26 20:23:41.829782] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:10:48.351 20:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:48.351 20:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:10:48.351 20:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:48.351 20:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:48.351 20:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:48.351 20:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:48.351 20:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:48.351 20:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:48.351 20:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:48.351 20:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:48.351 20:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:48.351 20:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:48.351 20:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:48.351 20:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:48.351 20:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:48.351 20:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:48.351 20:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:48.351 "name": "Existed_Raid",
00:10:48.351 "uuid": "b54dc206-6609-415f-8055-8384d9012cc8",
00:10:48.351 "strip_size_kb": 64,
00:10:48.351 "state": "configuring",
00:10:48.351 "raid_level": "raid0",
00:10:48.351 "superblock": true,
00:10:48.351 "num_base_bdevs": 4,
00:10:48.351 "num_base_bdevs_discovered": 0,
00:10:48.351 "num_base_bdevs_operational": 4,
00:10:48.351 "base_bdevs_list": [
00:10:48.351 {
00:10:48.351 "name": "BaseBdev1",
00:10:48.351 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:48.351 "is_configured": false,
00:10:48.351 "data_offset": 0,
00:10:48.351 "data_size": 0
00:10:48.351 },
00:10:48.351 {
00:10:48.351 "name": "BaseBdev2",
00:10:48.351 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:48.351 "is_configured": false,
00:10:48.351 "data_offset": 0,
00:10:48.351 "data_size": 0
00:10:48.351 },
00:10:48.351 {
00:10:48.351 "name": "BaseBdev3",
00:10:48.351 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:48.351 "is_configured": false,
00:10:48.351 "data_offset": 0,
00:10:48.351 "data_size": 0
00:10:48.351 },
00:10:48.351 {
00:10:48.351 "name": "BaseBdev4",
00:10:48.351 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:48.351 "is_configured": false,
00:10:48.351 "data_offset": 0,
00:10:48.352 "data_size": 0
00:10:48.352 }
00:10:48.352 ]
00:10:48.352 }'
00:10:48.352 20:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:48.352 20:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:48.917 20:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:10:48.917 20:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:48.917 20:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:48.918 [2024-11-26 20:23:42.260795] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:48.918 [2024-11-26 20:23:42.260858] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring
00:10:48.918 20:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:48.918 20:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:10:48.918 20:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:48.918 20:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:48.918 [2024-11-26 20:23:42.272891] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:48.918 [2024-11-26 20:23:42.272956] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:48.918 [2024-11-26 20:23:42.272968] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:48.918 [2024-11-26 20:23:42.272981] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:48.918 [2024-11-26 20:23:42.272991] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:10:48.918 [2024-11-26 20:23:42.273005] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:10:48.918 [2024-11-26 20:23:42.273014] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:10:48.918 [2024-11-26 20:23:42.273027] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:10:48.918 20:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:48.918 20:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:10:48.918 20:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:48.918 20:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:48.918 [2024-11-26 20:23:42.295456] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:48.918 BaseBdev1
00:10:48.918 20:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:48.918 20:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:10:48.918 20:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:10:48.918 20:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:10:48.918 20:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:10:48.918 20:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:10:48.918 20:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:10:48.918 20:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:10:48.918 20:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:48.918 20:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:48.918 20:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:48.918 20:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:10:48.918 20:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:48.918 20:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:48.918 [
00:10:48.918 {
00:10:48.918 "name": "BaseBdev1",
00:10:48.918 "aliases": [
00:10:48.918 "c3dae31e-5c61-4467-b780-3a733b9e6ffa"
00:10:48.918 ],
00:10:48.918 "product_name": "Malloc disk",
00:10:48.918 "block_size": 512,
00:10:48.918 "num_blocks": 65536,
00:10:48.918 "uuid": "c3dae31e-5c61-4467-b780-3a733b9e6ffa",
00:10:48.918 "assigned_rate_limits": {
00:10:48.918 "rw_ios_per_sec": 0,
00:10:48.918 "rw_mbytes_per_sec": 0,
00:10:48.918 "r_mbytes_per_sec": 0,
00:10:48.918 "w_mbytes_per_sec": 0
00:10:48.918 },
00:10:48.918 "claimed": true,
00:10:48.918 "claim_type": "exclusive_write",
00:10:48.918 "zoned": false,
00:10:48.918 "supported_io_types": {
00:10:48.918 "read": true,
00:10:48.918 "write": true,
00:10:48.918 "unmap": true,
00:10:48.918 "flush": true,
00:10:48.918 "reset": true,
00:10:48.918 "nvme_admin": false,
00:10:48.918 "nvme_io": false,
00:10:48.918 "nvme_io_md": false,
00:10:48.918 "write_zeroes": true,
00:10:48.918 "zcopy": true,
00:10:48.918 "get_zone_info": false,
00:10:48.918 "zone_management": false,
00:10:48.918 "zone_append": false,
00:10:48.918 "compare": false,
00:10:48.918 "compare_and_write": false,
00:10:48.918 "abort": true,
00:10:48.918 "seek_hole": false,
00:10:48.918 "seek_data": false,
00:10:48.918 "copy": true,
00:10:48.918 "nvme_iov_md": false
00:10:48.918 },
00:10:48.918 "memory_domains": [
00:10:48.918 {
00:10:48.918 "dma_device_id": "system",
00:10:48.918 "dma_device_type": 1
00:10:48.918 },
00:10:48.918 {
00:10:48.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:48.918 "dma_device_type": 2
00:10:48.918 }
00:10:48.918 ],
00:10:48.918 "driver_specific": {}
00:10:48.918 }
00:10:48.918 ]
00:10:48.918 20:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:48.918 20:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:10:48.918 20:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:10:48.918 20:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:48.918 20:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:48.918 20:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:48.918 20:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:48.918 20:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:48.918 20:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:48.918 20:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:48.918 20:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:48.918 20:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:48.918 20:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:48.918 20:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:48.918 20:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:48.918 20:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:48.918 20:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:48.918 20:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:48.918 "name": "Existed_Raid",
00:10:48.918 "uuid": "eb57ca6a-6097-412b-b2a1-24568f67b8f9",
00:10:48.918 "strip_size_kb": 64,
00:10:48.918 "state": "configuring",
00:10:48.918 "raid_level": "raid0",
00:10:48.918 "superblock": true,
00:10:48.918 "num_base_bdevs": 4,
00:10:48.918 "num_base_bdevs_discovered": 1,
00:10:48.918 "num_base_bdevs_operational": 4,
00:10:48.918 "base_bdevs_list": [
00:10:48.918 {
00:10:48.918 "name": "BaseBdev1",
00:10:48.918 "uuid": "c3dae31e-5c61-4467-b780-3a733b9e6ffa",
00:10:48.918 "is_configured": true,
00:10:48.918 "data_offset": 2048,
00:10:48.918 "data_size": 63488
00:10:48.918 },
00:10:48.918 {
00:10:48.918 "name": "BaseBdev2",
00:10:48.918 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:48.918 "is_configured": false,
00:10:48.918 "data_offset": 0,
00:10:48.918 "data_size": 0
00:10:48.918 },
00:10:48.918 {
00:10:48.918 "name": "BaseBdev3",
00:10:48.918 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:48.918 "is_configured": false,
00:10:48.918 "data_offset": 0,
00:10:48.918 "data_size": 0
00:10:48.918 },
00:10:48.918 {
00:10:48.918 "name": "BaseBdev4",
00:10:48.918 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:48.918 "is_configured": false,
00:10:48.918 "data_offset": 0,
00:10:48.918 "data_size": 0
00:10:48.918 }
00:10:48.918 ]
00:10:48.918 }'
00:10:48.918 20:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:48.918 20:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:49.487 20:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:10:49.487 20:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:49.487 20:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:49.487 [2024-11-26 20:23:42.774817] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:49.487 [2024-11-26 20:23:42.774898] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring
00:10:49.487 20:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:49.487 20:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:10:49.487 20:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:49.487 20:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:49.487 [2024-11-26 20:23:42.786902] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
[2024-11-26 20:23:42.789374] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
[2024-11-26 20:23:42.789447] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
[2024-11-26 20:23:42.789462] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
[2024-11-26 20:23:42.789475] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
[2024-11-26 20:23:42.789485] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
[2024-11-26 20:23:42.789498] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:10:49.487 20:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:49.487 20:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:10:49.487 20:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:10:49.487 20:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:10:49.487 20:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:49.487 20:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:49.487 20:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:49.487 20:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:49.487 20:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:49.487 20:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:49.487 20:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:49.487 20:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:49.487 20:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:49.487 20:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:49.487 20:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:49.487 20:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:49.487 20:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:49.487 20:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:49.487 20:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:49.487 "name": "Existed_Raid",
00:10:49.487 "uuid": "6182da27-6f0a-4276-8b0f-86caa109078c",
00:10:49.487 "strip_size_kb": 64,
00:10:49.487 "state": "configuring",
00:10:49.487 "raid_level": "raid0",
00:10:49.487 "superblock": true,
00:10:49.487 "num_base_bdevs": 4,
00:10:49.487 "num_base_bdevs_discovered": 1,
00:10:49.487 "num_base_bdevs_operational": 4,
00:10:49.487 "base_bdevs_list": [
00:10:49.487 {
00:10:49.487 "name": "BaseBdev1",
00:10:49.487 "uuid": "c3dae31e-5c61-4467-b780-3a733b9e6ffa",
00:10:49.487 "is_configured": true,
00:10:49.487 "data_offset": 2048,
00:10:49.487 "data_size": 63488
00:10:49.487 },
00:10:49.487 {
00:10:49.487 "name": "BaseBdev2",
00:10:49.487 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:49.487 "is_configured": false,
00:10:49.487 "data_offset": 0,
00:10:49.487 "data_size": 0
00:10:49.487 },
00:10:49.487 {
00:10:49.487 "name": "BaseBdev3",
00:10:49.487 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:49.487 "is_configured": false,
00:10:49.487 "data_offset": 0,
00:10:49.487 "data_size": 0
00:10:49.487 },
00:10:49.487 {
00:10:49.487 "name": "BaseBdev4",
00:10:49.487 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:49.487 "is_configured": false,
00:10:49.487 "data_offset": 0,
00:10:49.487 "data_size": 0
00:10:49.487 }
00:10:49.487 ]
00:10:49.487 }'
00:10:49.487 20:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:49.487 20:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:49.747 20:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:10:49.747 20:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:49.747 20:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:49.747 [2024-11-26 20:23:43.259334] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:49.747 BaseBdev2
00:10:49.747 20:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:49.747 20:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:10:49.747 20:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:10:49.747 20:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:10:49.747 20:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:10:49.747 20:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:10:49.747 20:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:10:49.747 20:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:10:49.747 20:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:49.747 20:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:49.747 20:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:49.747 20:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:10:49.747 20:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:49.747 20:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:49.747 [
00:10:49.747 {
00:10:49.747 "name": "BaseBdev2",
00:10:49.747 "aliases": [
00:10:49.747 "14d5e8d5-76d5-4c33-9a00-36c9b7b68c71"
00:10:49.747 ],
00:10:49.747 "product_name": "Malloc disk",
00:10:49.747 "block_size": 512,
00:10:49.747 "num_blocks": 65536,
00:10:49.747 "uuid": "14d5e8d5-76d5-4c33-9a00-36c9b7b68c71",
00:10:49.747 "assigned_rate_limits": {
00:10:49.747 "rw_ios_per_sec": 0,
00:10:49.747 "rw_mbytes_per_sec": 0,
00:10:49.747 "r_mbytes_per_sec": 0,
00:10:49.747 "w_mbytes_per_sec": 0
00:10:49.747 },
00:10:49.747 "claimed": true,
00:10:49.747 "claim_type": "exclusive_write",
00:10:49.747 "zoned": false,
00:10:49.747 "supported_io_types": {
00:10:49.747 "read": true,
00:10:49.747 "write": true,
00:10:49.747 "unmap": true,
00:10:49.747 "flush": true,
00:10:49.747 "reset": true,
00:10:49.747 "nvme_admin": false,
00:10:49.747 "nvme_io": false,
00:10:49.747 "nvme_io_md": false,
00:10:49.747 "write_zeroes": true,
00:10:49.747 "zcopy": true,
00:10:49.747 "get_zone_info": false,
00:10:49.747 "zone_management": false,
00:10:49.747 "zone_append": false,
00:10:50.006 "compare": false,
00:10:50.006 "compare_and_write": false,
00:10:50.006 "abort": true,
00:10:50.006 "seek_hole": false,
00:10:50.006 "seek_data": false,
00:10:50.006 "copy": true,
00:10:50.006 "nvme_iov_md": false
00:10:50.006 },
00:10:50.006 "memory_domains": [
00:10:50.006 {
00:10:50.006 "dma_device_id": "system",
00:10:50.006 "dma_device_type": 1
00:10:50.006 },
00:10:50.006 {
00:10:50.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:50.006 "dma_device_type": 2
00:10:50.006 }
00:10:50.006 ],
00:10:50.006 "driver_specific": {}
00:10:50.006 }
00:10:50.006 ]
00:10:50.006 20:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:50.006 20:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:10:50.006 20:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:10:50.006 20:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:10:50.006 20:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:10:50.006 20:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103
-- # local raid_bdev_name=Existed_Raid 00:10:50.006 20:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.006 20:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:50.006 20:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.006 20:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:50.006 20:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.006 20:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.006 20:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.006 20:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.006 20:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.006 20:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.006 20:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.006 20:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.006 20:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.006 20:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.006 "name": "Existed_Raid", 00:10:50.006 "uuid": "6182da27-6f0a-4276-8b0f-86caa109078c", 00:10:50.006 "strip_size_kb": 64, 00:10:50.006 "state": "configuring", 00:10:50.006 "raid_level": "raid0", 00:10:50.006 "superblock": true, 00:10:50.006 "num_base_bdevs": 4, 00:10:50.006 "num_base_bdevs_discovered": 2, 00:10:50.006 
"num_base_bdevs_operational": 4, 00:10:50.006 "base_bdevs_list": [ 00:10:50.006 { 00:10:50.006 "name": "BaseBdev1", 00:10:50.006 "uuid": "c3dae31e-5c61-4467-b780-3a733b9e6ffa", 00:10:50.006 "is_configured": true, 00:10:50.006 "data_offset": 2048, 00:10:50.006 "data_size": 63488 00:10:50.006 }, 00:10:50.006 { 00:10:50.006 "name": "BaseBdev2", 00:10:50.006 "uuid": "14d5e8d5-76d5-4c33-9a00-36c9b7b68c71", 00:10:50.006 "is_configured": true, 00:10:50.006 "data_offset": 2048, 00:10:50.006 "data_size": 63488 00:10:50.006 }, 00:10:50.006 { 00:10:50.006 "name": "BaseBdev3", 00:10:50.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.006 "is_configured": false, 00:10:50.006 "data_offset": 0, 00:10:50.006 "data_size": 0 00:10:50.006 }, 00:10:50.006 { 00:10:50.006 "name": "BaseBdev4", 00:10:50.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.006 "is_configured": false, 00:10:50.006 "data_offset": 0, 00:10:50.006 "data_size": 0 00:10:50.006 } 00:10:50.006 ] 00:10:50.006 }' 00:10:50.006 20:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.006 20:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.266 20:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:50.266 20:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.266 20:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.266 [2024-11-26 20:23:43.794558] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:50.266 BaseBdev3 00:10:50.266 20:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.266 20:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:50.266 20:23:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:50.266 20:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:50.266 20:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:50.266 20:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:50.266 20:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:50.266 20:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:50.266 20:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.266 20:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.266 20:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.266 20:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:50.266 20:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.266 20:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.528 [ 00:10:50.528 { 00:10:50.528 "name": "BaseBdev3", 00:10:50.528 "aliases": [ 00:10:50.528 "b505b657-ae88-4b8a-ad5d-0542e347258e" 00:10:50.528 ], 00:10:50.528 "product_name": "Malloc disk", 00:10:50.528 "block_size": 512, 00:10:50.528 "num_blocks": 65536, 00:10:50.528 "uuid": "b505b657-ae88-4b8a-ad5d-0542e347258e", 00:10:50.528 "assigned_rate_limits": { 00:10:50.528 "rw_ios_per_sec": 0, 00:10:50.528 "rw_mbytes_per_sec": 0, 00:10:50.528 "r_mbytes_per_sec": 0, 00:10:50.528 "w_mbytes_per_sec": 0 00:10:50.528 }, 00:10:50.528 "claimed": true, 00:10:50.528 "claim_type": "exclusive_write", 00:10:50.528 "zoned": false, 00:10:50.528 "supported_io_types": { 
00:10:50.528 "read": true, 00:10:50.528 "write": true, 00:10:50.528 "unmap": true, 00:10:50.528 "flush": true, 00:10:50.528 "reset": true, 00:10:50.528 "nvme_admin": false, 00:10:50.528 "nvme_io": false, 00:10:50.528 "nvme_io_md": false, 00:10:50.528 "write_zeroes": true, 00:10:50.528 "zcopy": true, 00:10:50.528 "get_zone_info": false, 00:10:50.528 "zone_management": false, 00:10:50.528 "zone_append": false, 00:10:50.528 "compare": false, 00:10:50.528 "compare_and_write": false, 00:10:50.528 "abort": true, 00:10:50.528 "seek_hole": false, 00:10:50.528 "seek_data": false, 00:10:50.528 "copy": true, 00:10:50.528 "nvme_iov_md": false 00:10:50.528 }, 00:10:50.528 "memory_domains": [ 00:10:50.528 { 00:10:50.528 "dma_device_id": "system", 00:10:50.528 "dma_device_type": 1 00:10:50.528 }, 00:10:50.528 { 00:10:50.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.528 "dma_device_type": 2 00:10:50.528 } 00:10:50.528 ], 00:10:50.528 "driver_specific": {} 00:10:50.528 } 00:10:50.528 ] 00:10:50.528 20:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.528 20:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:50.528 20:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:50.528 20:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:50.528 20:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:50.528 20:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.528 20:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.528 20:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:50.528 20:23:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.528 20:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:50.528 20:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.528 20:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.528 20:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.528 20:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.528 20:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.528 20:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.528 20:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.528 20:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.528 20:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.528 20:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.528 "name": "Existed_Raid", 00:10:50.528 "uuid": "6182da27-6f0a-4276-8b0f-86caa109078c", 00:10:50.528 "strip_size_kb": 64, 00:10:50.528 "state": "configuring", 00:10:50.528 "raid_level": "raid0", 00:10:50.528 "superblock": true, 00:10:50.528 "num_base_bdevs": 4, 00:10:50.528 "num_base_bdevs_discovered": 3, 00:10:50.528 "num_base_bdevs_operational": 4, 00:10:50.528 "base_bdevs_list": [ 00:10:50.528 { 00:10:50.528 "name": "BaseBdev1", 00:10:50.528 "uuid": "c3dae31e-5c61-4467-b780-3a733b9e6ffa", 00:10:50.528 "is_configured": true, 00:10:50.528 "data_offset": 2048, 00:10:50.528 "data_size": 63488 00:10:50.528 }, 00:10:50.528 { 00:10:50.528 "name": "BaseBdev2", 00:10:50.528 
"uuid": "14d5e8d5-76d5-4c33-9a00-36c9b7b68c71", 00:10:50.528 "is_configured": true, 00:10:50.528 "data_offset": 2048, 00:10:50.528 "data_size": 63488 00:10:50.528 }, 00:10:50.528 { 00:10:50.528 "name": "BaseBdev3", 00:10:50.528 "uuid": "b505b657-ae88-4b8a-ad5d-0542e347258e", 00:10:50.528 "is_configured": true, 00:10:50.528 "data_offset": 2048, 00:10:50.528 "data_size": 63488 00:10:50.528 }, 00:10:50.528 { 00:10:50.528 "name": "BaseBdev4", 00:10:50.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.528 "is_configured": false, 00:10:50.528 "data_offset": 0, 00:10:50.528 "data_size": 0 00:10:50.528 } 00:10:50.528 ] 00:10:50.528 }' 00:10:50.528 20:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.528 20:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.789 20:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:50.789 20:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.789 20:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.789 [2024-11-26 20:23:44.319311] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:50.789 [2024-11-26 20:23:44.319724] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:50.789 [2024-11-26 20:23:44.319807] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:50.789 [2024-11-26 20:23:44.320204] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:50.789 BaseBdev4 00:10:50.790 [2024-11-26 20:23:44.320410] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:50.790 [2024-11-26 20:23:44.320455] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000006980 00:10:50.790 [2024-11-26 20:23:44.320676] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:50.790 20:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.790 20:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:50.790 20:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:50.790 20:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:50.790 20:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:50.790 20:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:50.790 20:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:50.790 20:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:50.790 20:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.790 20:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.790 20:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.790 20:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:50.790 20:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.790 20:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.049 [ 00:10:51.049 { 00:10:51.049 "name": "BaseBdev4", 00:10:51.049 "aliases": [ 00:10:51.049 "8808ed68-4a3a-4978-896f-944b82c3cb7c" 00:10:51.049 ], 00:10:51.049 "product_name": "Malloc disk", 00:10:51.049 "block_size": 512, 00:10:51.049 
"num_blocks": 65536, 00:10:51.049 "uuid": "8808ed68-4a3a-4978-896f-944b82c3cb7c", 00:10:51.049 "assigned_rate_limits": { 00:10:51.049 "rw_ios_per_sec": 0, 00:10:51.049 "rw_mbytes_per_sec": 0, 00:10:51.049 "r_mbytes_per_sec": 0, 00:10:51.049 "w_mbytes_per_sec": 0 00:10:51.049 }, 00:10:51.049 "claimed": true, 00:10:51.049 "claim_type": "exclusive_write", 00:10:51.049 "zoned": false, 00:10:51.049 "supported_io_types": { 00:10:51.049 "read": true, 00:10:51.049 "write": true, 00:10:51.049 "unmap": true, 00:10:51.049 "flush": true, 00:10:51.049 "reset": true, 00:10:51.049 "nvme_admin": false, 00:10:51.049 "nvme_io": false, 00:10:51.049 "nvme_io_md": false, 00:10:51.049 "write_zeroes": true, 00:10:51.049 "zcopy": true, 00:10:51.049 "get_zone_info": false, 00:10:51.049 "zone_management": false, 00:10:51.049 "zone_append": false, 00:10:51.049 "compare": false, 00:10:51.049 "compare_and_write": false, 00:10:51.049 "abort": true, 00:10:51.049 "seek_hole": false, 00:10:51.049 "seek_data": false, 00:10:51.049 "copy": true, 00:10:51.049 "nvme_iov_md": false 00:10:51.049 }, 00:10:51.049 "memory_domains": [ 00:10:51.049 { 00:10:51.049 "dma_device_id": "system", 00:10:51.049 "dma_device_type": 1 00:10:51.049 }, 00:10:51.049 { 00:10:51.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.049 "dma_device_type": 2 00:10:51.049 } 00:10:51.049 ], 00:10:51.049 "driver_specific": {} 00:10:51.049 } 00:10:51.049 ] 00:10:51.049 20:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.049 20:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:51.049 20:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:51.049 20:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:51.049 20:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:10:51.049 20:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.049 20:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:51.049 20:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:51.049 20:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.050 20:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:51.050 20:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.050 20:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.050 20:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.050 20:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.050 20:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.050 20:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.050 20:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.050 20:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.050 20:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.050 20:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.050 "name": "Existed_Raid", 00:10:51.050 "uuid": "6182da27-6f0a-4276-8b0f-86caa109078c", 00:10:51.050 "strip_size_kb": 64, 00:10:51.050 "state": "online", 00:10:51.050 "raid_level": "raid0", 00:10:51.050 "superblock": true, 00:10:51.050 "num_base_bdevs": 4, 
00:10:51.050 "num_base_bdevs_discovered": 4, 00:10:51.050 "num_base_bdevs_operational": 4, 00:10:51.050 "base_bdevs_list": [ 00:10:51.050 { 00:10:51.050 "name": "BaseBdev1", 00:10:51.050 "uuid": "c3dae31e-5c61-4467-b780-3a733b9e6ffa", 00:10:51.050 "is_configured": true, 00:10:51.050 "data_offset": 2048, 00:10:51.050 "data_size": 63488 00:10:51.050 }, 00:10:51.050 { 00:10:51.050 "name": "BaseBdev2", 00:10:51.050 "uuid": "14d5e8d5-76d5-4c33-9a00-36c9b7b68c71", 00:10:51.050 "is_configured": true, 00:10:51.050 "data_offset": 2048, 00:10:51.050 "data_size": 63488 00:10:51.050 }, 00:10:51.050 { 00:10:51.050 "name": "BaseBdev3", 00:10:51.050 "uuid": "b505b657-ae88-4b8a-ad5d-0542e347258e", 00:10:51.050 "is_configured": true, 00:10:51.050 "data_offset": 2048, 00:10:51.050 "data_size": 63488 00:10:51.050 }, 00:10:51.050 { 00:10:51.050 "name": "BaseBdev4", 00:10:51.050 "uuid": "8808ed68-4a3a-4978-896f-944b82c3cb7c", 00:10:51.050 "is_configured": true, 00:10:51.050 "data_offset": 2048, 00:10:51.050 "data_size": 63488 00:10:51.050 } 00:10:51.050 ] 00:10:51.050 }' 00:10:51.050 20:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.050 20:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.311 20:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:51.311 20:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:51.311 20:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:51.311 20:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:51.311 20:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:51.311 20:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:51.311 
20:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:51.311 20:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:51.311 20:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.311 20:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.311 [2024-11-26 20:23:44.854998] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:51.571 20:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.571 20:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:51.571 "name": "Existed_Raid", 00:10:51.571 "aliases": [ 00:10:51.571 "6182da27-6f0a-4276-8b0f-86caa109078c" 00:10:51.571 ], 00:10:51.571 "product_name": "Raid Volume", 00:10:51.571 "block_size": 512, 00:10:51.571 "num_blocks": 253952, 00:10:51.571 "uuid": "6182da27-6f0a-4276-8b0f-86caa109078c", 00:10:51.571 "assigned_rate_limits": { 00:10:51.571 "rw_ios_per_sec": 0, 00:10:51.571 "rw_mbytes_per_sec": 0, 00:10:51.571 "r_mbytes_per_sec": 0, 00:10:51.571 "w_mbytes_per_sec": 0 00:10:51.571 }, 00:10:51.571 "claimed": false, 00:10:51.571 "zoned": false, 00:10:51.571 "supported_io_types": { 00:10:51.571 "read": true, 00:10:51.571 "write": true, 00:10:51.571 "unmap": true, 00:10:51.571 "flush": true, 00:10:51.571 "reset": true, 00:10:51.571 "nvme_admin": false, 00:10:51.571 "nvme_io": false, 00:10:51.571 "nvme_io_md": false, 00:10:51.571 "write_zeroes": true, 00:10:51.571 "zcopy": false, 00:10:51.571 "get_zone_info": false, 00:10:51.571 "zone_management": false, 00:10:51.571 "zone_append": false, 00:10:51.571 "compare": false, 00:10:51.571 "compare_and_write": false, 00:10:51.571 "abort": false, 00:10:51.571 "seek_hole": false, 00:10:51.571 "seek_data": false, 00:10:51.571 "copy": false, 00:10:51.571 
"nvme_iov_md": false 00:10:51.571 }, 00:10:51.571 "memory_domains": [ 00:10:51.571 { 00:10:51.571 "dma_device_id": "system", 00:10:51.571 "dma_device_type": 1 00:10:51.571 }, 00:10:51.571 { 00:10:51.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.571 "dma_device_type": 2 00:10:51.571 }, 00:10:51.571 { 00:10:51.571 "dma_device_id": "system", 00:10:51.571 "dma_device_type": 1 00:10:51.571 }, 00:10:51.571 { 00:10:51.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.571 "dma_device_type": 2 00:10:51.571 }, 00:10:51.571 { 00:10:51.571 "dma_device_id": "system", 00:10:51.571 "dma_device_type": 1 00:10:51.571 }, 00:10:51.571 { 00:10:51.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.571 "dma_device_type": 2 00:10:51.571 }, 00:10:51.571 { 00:10:51.571 "dma_device_id": "system", 00:10:51.571 "dma_device_type": 1 00:10:51.571 }, 00:10:51.571 { 00:10:51.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.571 "dma_device_type": 2 00:10:51.571 } 00:10:51.571 ], 00:10:51.571 "driver_specific": { 00:10:51.571 "raid": { 00:10:51.571 "uuid": "6182da27-6f0a-4276-8b0f-86caa109078c", 00:10:51.571 "strip_size_kb": 64, 00:10:51.571 "state": "online", 00:10:51.571 "raid_level": "raid0", 00:10:51.571 "superblock": true, 00:10:51.571 "num_base_bdevs": 4, 00:10:51.571 "num_base_bdevs_discovered": 4, 00:10:51.571 "num_base_bdevs_operational": 4, 00:10:51.571 "base_bdevs_list": [ 00:10:51.571 { 00:10:51.571 "name": "BaseBdev1", 00:10:51.571 "uuid": "c3dae31e-5c61-4467-b780-3a733b9e6ffa", 00:10:51.571 "is_configured": true, 00:10:51.571 "data_offset": 2048, 00:10:51.571 "data_size": 63488 00:10:51.571 }, 00:10:51.571 { 00:10:51.571 "name": "BaseBdev2", 00:10:51.571 "uuid": "14d5e8d5-76d5-4c33-9a00-36c9b7b68c71", 00:10:51.571 "is_configured": true, 00:10:51.571 "data_offset": 2048, 00:10:51.571 "data_size": 63488 00:10:51.571 }, 00:10:51.571 { 00:10:51.571 "name": "BaseBdev3", 00:10:51.571 "uuid": "b505b657-ae88-4b8a-ad5d-0542e347258e", 00:10:51.571 "is_configured": true, 
00:10:51.571 "data_offset": 2048, 00:10:51.571 "data_size": 63488 00:10:51.571 }, 00:10:51.571 { 00:10:51.571 "name": "BaseBdev4", 00:10:51.571 "uuid": "8808ed68-4a3a-4978-896f-944b82c3cb7c", 00:10:51.571 "is_configured": true, 00:10:51.571 "data_offset": 2048, 00:10:51.571 "data_size": 63488 00:10:51.571 } 00:10:51.571 ] 00:10:51.571 } 00:10:51.571 } 00:10:51.571 }' 00:10:51.571 20:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:51.572 20:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:51.572 BaseBdev2 00:10:51.572 BaseBdev3 00:10:51.572 BaseBdev4' 00:10:51.572 20:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.572 20:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:51.572 20:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.572 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:51.572 20:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.572 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.572 20:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.572 20:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.572 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.572 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.572 20:23:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.572 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:51.572 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.572 20:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.572 20:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.572 20:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.572 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.572 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.572 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.572 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:51.572 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.572 20:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.572 20:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.832 20:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.832 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.832 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.832 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:51.832 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:51.832 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.832 20:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.832 20:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.832 20:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.832 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.832 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.832 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:51.832 20:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.832 20:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.832 [2024-11-26 20:23:45.210006] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:51.832 [2024-11-26 20:23:45.210052] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:51.832 [2024-11-26 20:23:45.210137] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:51.832 20:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.832 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:51.832 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:51.832 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:51.832 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:51.832 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:51.832 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:51.832 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.832 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:51.832 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:51.832 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.832 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:51.832 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.832 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.832 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.832 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.832 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.832 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.832 20:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.832 20:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.832 20:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:51.832 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.832 "name": "Existed_Raid", 00:10:51.832 "uuid": "6182da27-6f0a-4276-8b0f-86caa109078c", 00:10:51.832 "strip_size_kb": 64, 00:10:51.832 "state": "offline", 00:10:51.832 "raid_level": "raid0", 00:10:51.832 "superblock": true, 00:10:51.832 "num_base_bdevs": 4, 00:10:51.832 "num_base_bdevs_discovered": 3, 00:10:51.832 "num_base_bdevs_operational": 3, 00:10:51.832 "base_bdevs_list": [ 00:10:51.832 { 00:10:51.832 "name": null, 00:10:51.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.832 "is_configured": false, 00:10:51.832 "data_offset": 0, 00:10:51.832 "data_size": 63488 00:10:51.832 }, 00:10:51.832 { 00:10:51.832 "name": "BaseBdev2", 00:10:51.833 "uuid": "14d5e8d5-76d5-4c33-9a00-36c9b7b68c71", 00:10:51.833 "is_configured": true, 00:10:51.833 "data_offset": 2048, 00:10:51.833 "data_size": 63488 00:10:51.833 }, 00:10:51.833 { 00:10:51.833 "name": "BaseBdev3", 00:10:51.833 "uuid": "b505b657-ae88-4b8a-ad5d-0542e347258e", 00:10:51.833 "is_configured": true, 00:10:51.833 "data_offset": 2048, 00:10:51.833 "data_size": 63488 00:10:51.833 }, 00:10:51.833 { 00:10:51.833 "name": "BaseBdev4", 00:10:51.833 "uuid": "8808ed68-4a3a-4978-896f-944b82c3cb7c", 00:10:51.833 "is_configured": true, 00:10:51.833 "data_offset": 2048, 00:10:51.833 "data_size": 63488 00:10:51.833 } 00:10:51.833 ] 00:10:51.833 }' 00:10:51.833 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.833 20:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.404 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:52.404 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:52.404 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:52.404 20:23:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.404 20:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.404 20:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.404 20:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.404 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:52.404 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:52.404 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:52.404 20:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.404 20:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.404 [2024-11-26 20:23:45.730622] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:52.404 20:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.404 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:52.404 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:52.404 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:52.404 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.404 20:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.404 20:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.404 20:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:52.404 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:52.404 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:52.404 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:52.404 20:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.404 20:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.404 [2024-11-26 20:23:45.812504] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:52.404 20:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.404 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:52.404 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:52.404 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.404 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:52.404 20:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.404 20:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.404 20:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.404 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:52.404 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:52.404 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:52.404 20:23:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.404 20:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.404 [2024-11-26 20:23:45.881002] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:52.404 [2024-11-26 20:23:45.881170] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:10:52.404 20:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.404 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:52.404 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:52.404 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.404 20:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.404 20:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.404 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:52.404 20:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.664 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:52.664 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:52.664 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:52.664 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:52.664 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:52.664 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:52.664 20:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.664 20:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.664 BaseBdev2 00:10:52.664 20:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.664 20:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:52.664 20:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:52.664 20:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:52.664 20:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:52.664 20:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:52.664 20:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:52.664 20:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:52.664 20:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.664 20:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.664 20:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.664 20:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:52.664 20:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.664 20:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.664 [ 00:10:52.664 { 00:10:52.664 "name": "BaseBdev2", 00:10:52.664 "aliases": [ 00:10:52.664 
"62c2d292-ed09-47f1-a91e-36c9a20b73a9" 00:10:52.664 ], 00:10:52.664 "product_name": "Malloc disk", 00:10:52.664 "block_size": 512, 00:10:52.664 "num_blocks": 65536, 00:10:52.664 "uuid": "62c2d292-ed09-47f1-a91e-36c9a20b73a9", 00:10:52.664 "assigned_rate_limits": { 00:10:52.664 "rw_ios_per_sec": 0, 00:10:52.664 "rw_mbytes_per_sec": 0, 00:10:52.664 "r_mbytes_per_sec": 0, 00:10:52.664 "w_mbytes_per_sec": 0 00:10:52.664 }, 00:10:52.664 "claimed": false, 00:10:52.664 "zoned": false, 00:10:52.664 "supported_io_types": { 00:10:52.664 "read": true, 00:10:52.664 "write": true, 00:10:52.664 "unmap": true, 00:10:52.664 "flush": true, 00:10:52.664 "reset": true, 00:10:52.664 "nvme_admin": false, 00:10:52.664 "nvme_io": false, 00:10:52.664 "nvme_io_md": false, 00:10:52.664 "write_zeroes": true, 00:10:52.664 "zcopy": true, 00:10:52.664 "get_zone_info": false, 00:10:52.664 "zone_management": false, 00:10:52.664 "zone_append": false, 00:10:52.664 "compare": false, 00:10:52.664 "compare_and_write": false, 00:10:52.664 "abort": true, 00:10:52.664 "seek_hole": false, 00:10:52.664 "seek_data": false, 00:10:52.664 "copy": true, 00:10:52.664 "nvme_iov_md": false 00:10:52.664 }, 00:10:52.664 "memory_domains": [ 00:10:52.664 { 00:10:52.664 "dma_device_id": "system", 00:10:52.664 "dma_device_type": 1 00:10:52.664 }, 00:10:52.664 { 00:10:52.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.664 "dma_device_type": 2 00:10:52.664 } 00:10:52.664 ], 00:10:52.664 "driver_specific": {} 00:10:52.664 } 00:10:52.664 ] 00:10:52.664 20:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.664 20:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:52.664 20:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:52.664 20:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:52.664 20:23:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:52.664 20:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.664 20:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.664 BaseBdev3 00:10:52.664 20:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.664 20:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:52.664 20:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:52.664 20:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:52.664 20:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:52.664 20:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:52.664 20:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:52.664 20:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:52.664 20:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.664 20:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.664 20:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.664 20:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:52.664 20:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.664 20:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.664 [ 00:10:52.664 { 
00:10:52.664 "name": "BaseBdev3", 00:10:52.664 "aliases": [ 00:10:52.664 "4a5d6bae-ac72-49fd-97c3-8343df849f6b" 00:10:52.664 ], 00:10:52.664 "product_name": "Malloc disk", 00:10:52.664 "block_size": 512, 00:10:52.664 "num_blocks": 65536, 00:10:52.664 "uuid": "4a5d6bae-ac72-49fd-97c3-8343df849f6b", 00:10:52.664 "assigned_rate_limits": { 00:10:52.664 "rw_ios_per_sec": 0, 00:10:52.664 "rw_mbytes_per_sec": 0, 00:10:52.664 "r_mbytes_per_sec": 0, 00:10:52.664 "w_mbytes_per_sec": 0 00:10:52.664 }, 00:10:52.664 "claimed": false, 00:10:52.664 "zoned": false, 00:10:52.664 "supported_io_types": { 00:10:52.664 "read": true, 00:10:52.664 "write": true, 00:10:52.664 "unmap": true, 00:10:52.664 "flush": true, 00:10:52.664 "reset": true, 00:10:52.664 "nvme_admin": false, 00:10:52.664 "nvme_io": false, 00:10:52.664 "nvme_io_md": false, 00:10:52.664 "write_zeroes": true, 00:10:52.664 "zcopy": true, 00:10:52.665 "get_zone_info": false, 00:10:52.665 "zone_management": false, 00:10:52.665 "zone_append": false, 00:10:52.665 "compare": false, 00:10:52.665 "compare_and_write": false, 00:10:52.665 "abort": true, 00:10:52.665 "seek_hole": false, 00:10:52.665 "seek_data": false, 00:10:52.665 "copy": true, 00:10:52.665 "nvme_iov_md": false 00:10:52.665 }, 00:10:52.665 "memory_domains": [ 00:10:52.665 { 00:10:52.665 "dma_device_id": "system", 00:10:52.665 "dma_device_type": 1 00:10:52.665 }, 00:10:52.665 { 00:10:52.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.665 "dma_device_type": 2 00:10:52.665 } 00:10:52.665 ], 00:10:52.665 "driver_specific": {} 00:10:52.665 } 00:10:52.665 ] 00:10:52.665 20:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.665 20:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:52.665 20:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:52.665 20:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:52.665 20:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:52.665 20:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.665 20:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.665 BaseBdev4 00:10:52.665 20:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.665 20:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:52.665 20:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:52.665 20:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:52.665 20:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:52.665 20:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:52.665 20:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:52.665 20:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:52.665 20:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.665 20:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.665 20:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.665 20:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:52.665 20:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.665 20:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:52.665 [ 00:10:52.665 { 00:10:52.665 "name": "BaseBdev4", 00:10:52.665 "aliases": [ 00:10:52.665 "4d258610-33b4-4205-9d6e-738cda9ce0db" 00:10:52.665 ], 00:10:52.665 "product_name": "Malloc disk", 00:10:52.665 "block_size": 512, 00:10:52.665 "num_blocks": 65536, 00:10:52.665 "uuid": "4d258610-33b4-4205-9d6e-738cda9ce0db", 00:10:52.665 "assigned_rate_limits": { 00:10:52.665 "rw_ios_per_sec": 0, 00:10:52.665 "rw_mbytes_per_sec": 0, 00:10:52.665 "r_mbytes_per_sec": 0, 00:10:52.665 "w_mbytes_per_sec": 0 00:10:52.665 }, 00:10:52.665 "claimed": false, 00:10:52.665 "zoned": false, 00:10:52.665 "supported_io_types": { 00:10:52.665 "read": true, 00:10:52.665 "write": true, 00:10:52.665 "unmap": true, 00:10:52.665 "flush": true, 00:10:52.665 "reset": true, 00:10:52.665 "nvme_admin": false, 00:10:52.665 "nvme_io": false, 00:10:52.665 "nvme_io_md": false, 00:10:52.665 "write_zeroes": true, 00:10:52.665 "zcopy": true, 00:10:52.665 "get_zone_info": false, 00:10:52.665 "zone_management": false, 00:10:52.665 "zone_append": false, 00:10:52.665 "compare": false, 00:10:52.665 "compare_and_write": false, 00:10:52.665 "abort": true, 00:10:52.665 "seek_hole": false, 00:10:52.665 "seek_data": false, 00:10:52.665 "copy": true, 00:10:52.665 "nvme_iov_md": false 00:10:52.665 }, 00:10:52.665 "memory_domains": [ 00:10:52.665 { 00:10:52.665 "dma_device_id": "system", 00:10:52.665 "dma_device_type": 1 00:10:52.665 }, 00:10:52.665 { 00:10:52.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.665 "dma_device_type": 2 00:10:52.665 } 00:10:52.665 ], 00:10:52.665 "driver_specific": {} 00:10:52.665 } 00:10:52.665 ] 00:10:52.665 20:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.665 20:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:52.665 20:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:52.665 20:23:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:52.665 20:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:52.665 20:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.665 20:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.665 [2024-11-26 20:23:46.142678] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:52.665 [2024-11-26 20:23:46.142749] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:52.665 [2024-11-26 20:23:46.142786] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:52.665 [2024-11-26 20:23:46.145161] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:52.665 [2024-11-26 20:23:46.145244] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:52.665 20:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.665 20:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:52.665 20:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.665 20:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.665 20:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:52.665 20:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.665 20:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:52.665 20:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.665 20:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.665 20:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.665 20:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.665 20:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.665 20:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.665 20:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.665 20:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.665 20:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.665 20:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.665 "name": "Existed_Raid", 00:10:52.665 "uuid": "2a5ab80c-8860-4d2e-ba44-8fff3c91bd87", 00:10:52.665 "strip_size_kb": 64, 00:10:52.665 "state": "configuring", 00:10:52.665 "raid_level": "raid0", 00:10:52.665 "superblock": true, 00:10:52.665 "num_base_bdevs": 4, 00:10:52.665 "num_base_bdevs_discovered": 3, 00:10:52.665 "num_base_bdevs_operational": 4, 00:10:52.665 "base_bdevs_list": [ 00:10:52.665 { 00:10:52.665 "name": "BaseBdev1", 00:10:52.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.665 "is_configured": false, 00:10:52.665 "data_offset": 0, 00:10:52.665 "data_size": 0 00:10:52.665 }, 00:10:52.665 { 00:10:52.665 "name": "BaseBdev2", 00:10:52.665 "uuid": "62c2d292-ed09-47f1-a91e-36c9a20b73a9", 00:10:52.665 "is_configured": true, 00:10:52.665 "data_offset": 2048, 00:10:52.665 "data_size": 63488 
00:10:52.665 }, 00:10:52.665 { 00:10:52.665 "name": "BaseBdev3", 00:10:52.665 "uuid": "4a5d6bae-ac72-49fd-97c3-8343df849f6b", 00:10:52.665 "is_configured": true, 00:10:52.665 "data_offset": 2048, 00:10:52.665 "data_size": 63488 00:10:52.665 }, 00:10:52.665 { 00:10:52.665 "name": "BaseBdev4", 00:10:52.665 "uuid": "4d258610-33b4-4205-9d6e-738cda9ce0db", 00:10:52.665 "is_configured": true, 00:10:52.665 "data_offset": 2048, 00:10:52.665 "data_size": 63488 00:10:52.665 } 00:10:52.665 ] 00:10:52.665 }' 00:10:52.665 20:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.665 20:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.234 20:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:53.235 20:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.235 20:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.235 [2024-11-26 20:23:46.649787] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:53.235 20:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.235 20:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:53.235 20:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.235 20:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.235 20:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:53.235 20:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.235 20:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:53.235 20:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.235 20:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.235 20:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.235 20:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.235 20:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.235 20:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.235 20:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.235 20:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.235 20:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.235 20:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.235 "name": "Existed_Raid", 00:10:53.235 "uuid": "2a5ab80c-8860-4d2e-ba44-8fff3c91bd87", 00:10:53.235 "strip_size_kb": 64, 00:10:53.235 "state": "configuring", 00:10:53.235 "raid_level": "raid0", 00:10:53.235 "superblock": true, 00:10:53.235 "num_base_bdevs": 4, 00:10:53.235 "num_base_bdevs_discovered": 2, 00:10:53.235 "num_base_bdevs_operational": 4, 00:10:53.235 "base_bdevs_list": [ 00:10:53.235 { 00:10:53.235 "name": "BaseBdev1", 00:10:53.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.235 "is_configured": false, 00:10:53.235 "data_offset": 0, 00:10:53.235 "data_size": 0 00:10:53.235 }, 00:10:53.235 { 00:10:53.235 "name": null, 00:10:53.235 "uuid": "62c2d292-ed09-47f1-a91e-36c9a20b73a9", 00:10:53.235 "is_configured": false, 00:10:53.235 "data_offset": 0, 00:10:53.235 "data_size": 63488 
00:10:53.235 }, 00:10:53.235 { 00:10:53.235 "name": "BaseBdev3", 00:10:53.235 "uuid": "4a5d6bae-ac72-49fd-97c3-8343df849f6b", 00:10:53.235 "is_configured": true, 00:10:53.235 "data_offset": 2048, 00:10:53.235 "data_size": 63488 00:10:53.235 }, 00:10:53.235 { 00:10:53.235 "name": "BaseBdev4", 00:10:53.235 "uuid": "4d258610-33b4-4205-9d6e-738cda9ce0db", 00:10:53.235 "is_configured": true, 00:10:53.235 "data_offset": 2048, 00:10:53.235 "data_size": 63488 00:10:53.235 } 00:10:53.235 ] 00:10:53.235 }' 00:10:53.235 20:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.235 20:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.802 20:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.802 20:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.802 20:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:53.802 20:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.802 20:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.802 20:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:53.802 20:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:53.802 20:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.802 20:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.802 [2024-11-26 20:23:47.185193] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:53.802 BaseBdev1 00:10:53.802 20:23:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.802 20:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:53.802 20:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:53.802 20:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:53.802 20:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:53.802 20:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:53.802 20:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:53.802 20:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:53.802 20:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.802 20:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.802 20:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.802 20:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:53.802 20:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.802 20:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.802 [ 00:10:53.802 { 00:10:53.802 "name": "BaseBdev1", 00:10:53.802 "aliases": [ 00:10:53.802 "df85f377-7ccd-420b-957c-0070079c53ba" 00:10:53.802 ], 00:10:53.803 "product_name": "Malloc disk", 00:10:53.803 "block_size": 512, 00:10:53.803 "num_blocks": 65536, 00:10:53.803 "uuid": "df85f377-7ccd-420b-957c-0070079c53ba", 00:10:53.803 "assigned_rate_limits": { 00:10:53.803 "rw_ios_per_sec": 0, 00:10:53.803 "rw_mbytes_per_sec": 0, 
00:10:53.803 "r_mbytes_per_sec": 0, 00:10:53.803 "w_mbytes_per_sec": 0 00:10:53.803 }, 00:10:53.803 "claimed": true, 00:10:53.803 "claim_type": "exclusive_write", 00:10:53.803 "zoned": false, 00:10:53.803 "supported_io_types": { 00:10:53.803 "read": true, 00:10:53.803 "write": true, 00:10:53.803 "unmap": true, 00:10:53.803 "flush": true, 00:10:53.803 "reset": true, 00:10:53.803 "nvme_admin": false, 00:10:53.803 "nvme_io": false, 00:10:53.803 "nvme_io_md": false, 00:10:53.803 "write_zeroes": true, 00:10:53.803 "zcopy": true, 00:10:53.803 "get_zone_info": false, 00:10:53.803 "zone_management": false, 00:10:53.803 "zone_append": false, 00:10:53.803 "compare": false, 00:10:53.803 "compare_and_write": false, 00:10:53.803 "abort": true, 00:10:53.803 "seek_hole": false, 00:10:53.803 "seek_data": false, 00:10:53.803 "copy": true, 00:10:53.803 "nvme_iov_md": false 00:10:53.803 }, 00:10:53.803 "memory_domains": [ 00:10:53.803 { 00:10:53.803 "dma_device_id": "system", 00:10:53.803 "dma_device_type": 1 00:10:53.803 }, 00:10:53.803 { 00:10:53.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.803 "dma_device_type": 2 00:10:53.803 } 00:10:53.803 ], 00:10:53.803 "driver_specific": {} 00:10:53.803 } 00:10:53.803 ] 00:10:53.803 20:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.803 20:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:53.803 20:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:53.803 20:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.803 20:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.803 20:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:53.803 20:23:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.803 20:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.803 20:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.803 20:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.803 20:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.803 20:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.803 20:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.803 20:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.803 20:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.803 20:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.803 20:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.803 20:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.803 "name": "Existed_Raid", 00:10:53.803 "uuid": "2a5ab80c-8860-4d2e-ba44-8fff3c91bd87", 00:10:53.803 "strip_size_kb": 64, 00:10:53.803 "state": "configuring", 00:10:53.803 "raid_level": "raid0", 00:10:53.803 "superblock": true, 00:10:53.803 "num_base_bdevs": 4, 00:10:53.803 "num_base_bdevs_discovered": 3, 00:10:53.803 "num_base_bdevs_operational": 4, 00:10:53.803 "base_bdevs_list": [ 00:10:53.803 { 00:10:53.803 "name": "BaseBdev1", 00:10:53.803 "uuid": "df85f377-7ccd-420b-957c-0070079c53ba", 00:10:53.803 "is_configured": true, 00:10:53.803 "data_offset": 2048, 00:10:53.803 "data_size": 63488 00:10:53.803 }, 00:10:53.803 { 
00:10:53.803 "name": null, 00:10:53.803 "uuid": "62c2d292-ed09-47f1-a91e-36c9a20b73a9", 00:10:53.803 "is_configured": false, 00:10:53.803 "data_offset": 0, 00:10:53.803 "data_size": 63488 00:10:53.803 }, 00:10:53.803 { 00:10:53.803 "name": "BaseBdev3", 00:10:53.803 "uuid": "4a5d6bae-ac72-49fd-97c3-8343df849f6b", 00:10:53.803 "is_configured": true, 00:10:53.803 "data_offset": 2048, 00:10:53.803 "data_size": 63488 00:10:53.803 }, 00:10:53.803 { 00:10:53.803 "name": "BaseBdev4", 00:10:53.803 "uuid": "4d258610-33b4-4205-9d6e-738cda9ce0db", 00:10:53.803 "is_configured": true, 00:10:53.803 "data_offset": 2048, 00:10:53.803 "data_size": 63488 00:10:53.803 } 00:10:53.803 ] 00:10:53.803 }' 00:10:53.803 20:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.803 20:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.378 20:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.378 20:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.378 20:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.378 20:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:54.378 20:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.378 20:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:54.378 20:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:54.378 20:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.378 20:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.378 [2024-11-26 20:23:47.716716] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:54.378 20:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.378 20:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:54.378 20:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.378 20:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.378 20:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:54.378 20:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.378 20:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:54.378 20:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.378 20:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.378 20:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.378 20:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.378 20:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.378 20:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.378 20:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.378 20:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.378 20:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.378 20:23:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.378 "name": "Existed_Raid", 00:10:54.378 "uuid": "2a5ab80c-8860-4d2e-ba44-8fff3c91bd87", 00:10:54.378 "strip_size_kb": 64, 00:10:54.378 "state": "configuring", 00:10:54.378 "raid_level": "raid0", 00:10:54.378 "superblock": true, 00:10:54.378 "num_base_bdevs": 4, 00:10:54.378 "num_base_bdevs_discovered": 2, 00:10:54.378 "num_base_bdevs_operational": 4, 00:10:54.378 "base_bdevs_list": [ 00:10:54.378 { 00:10:54.378 "name": "BaseBdev1", 00:10:54.378 "uuid": "df85f377-7ccd-420b-957c-0070079c53ba", 00:10:54.378 "is_configured": true, 00:10:54.378 "data_offset": 2048, 00:10:54.378 "data_size": 63488 00:10:54.378 }, 00:10:54.378 { 00:10:54.378 "name": null, 00:10:54.378 "uuid": "62c2d292-ed09-47f1-a91e-36c9a20b73a9", 00:10:54.378 "is_configured": false, 00:10:54.378 "data_offset": 0, 00:10:54.378 "data_size": 63488 00:10:54.378 }, 00:10:54.378 { 00:10:54.378 "name": null, 00:10:54.378 "uuid": "4a5d6bae-ac72-49fd-97c3-8343df849f6b", 00:10:54.378 "is_configured": false, 00:10:54.378 "data_offset": 0, 00:10:54.378 "data_size": 63488 00:10:54.378 }, 00:10:54.378 { 00:10:54.378 "name": "BaseBdev4", 00:10:54.378 "uuid": "4d258610-33b4-4205-9d6e-738cda9ce0db", 00:10:54.378 "is_configured": true, 00:10:54.378 "data_offset": 2048, 00:10:54.378 "data_size": 63488 00:10:54.378 } 00:10:54.378 ] 00:10:54.378 }' 00:10:54.378 20:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.378 20:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.647 20:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.647 20:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:54.647 20:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.647 
20:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.647 20:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.647 20:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:54.647 20:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:54.647 20:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.647 20:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.647 [2024-11-26 20:23:48.192034] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:54.907 20:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.907 20:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:54.907 20:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.907 20:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.907 20:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:54.907 20:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.907 20:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:54.907 20:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.907 20:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.907 20:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:54.907 20:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.907 20:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.907 20:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.907 20:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.907 20:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.907 20:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.907 20:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.907 "name": "Existed_Raid", 00:10:54.907 "uuid": "2a5ab80c-8860-4d2e-ba44-8fff3c91bd87", 00:10:54.907 "strip_size_kb": 64, 00:10:54.907 "state": "configuring", 00:10:54.907 "raid_level": "raid0", 00:10:54.907 "superblock": true, 00:10:54.907 "num_base_bdevs": 4, 00:10:54.907 "num_base_bdevs_discovered": 3, 00:10:54.907 "num_base_bdevs_operational": 4, 00:10:54.907 "base_bdevs_list": [ 00:10:54.907 { 00:10:54.907 "name": "BaseBdev1", 00:10:54.907 "uuid": "df85f377-7ccd-420b-957c-0070079c53ba", 00:10:54.907 "is_configured": true, 00:10:54.907 "data_offset": 2048, 00:10:54.907 "data_size": 63488 00:10:54.907 }, 00:10:54.907 { 00:10:54.907 "name": null, 00:10:54.907 "uuid": "62c2d292-ed09-47f1-a91e-36c9a20b73a9", 00:10:54.907 "is_configured": false, 00:10:54.907 "data_offset": 0, 00:10:54.907 "data_size": 63488 00:10:54.907 }, 00:10:54.907 { 00:10:54.907 "name": "BaseBdev3", 00:10:54.907 "uuid": "4a5d6bae-ac72-49fd-97c3-8343df849f6b", 00:10:54.907 "is_configured": true, 00:10:54.907 "data_offset": 2048, 00:10:54.907 "data_size": 63488 00:10:54.907 }, 00:10:54.907 { 00:10:54.907 "name": "BaseBdev4", 00:10:54.907 "uuid": 
"4d258610-33b4-4205-9d6e-738cda9ce0db", 00:10:54.907 "is_configured": true, 00:10:54.907 "data_offset": 2048, 00:10:54.907 "data_size": 63488 00:10:54.907 } 00:10:54.907 ] 00:10:54.907 }' 00:10:54.907 20:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.907 20:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.165 20:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.165 20:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.165 20:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:55.165 20:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.165 20:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.165 20:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:55.165 20:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:55.165 20:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.165 20:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.165 [2024-11-26 20:23:48.675276] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:55.165 20:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.165 20:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:55.165 20:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.165 20:23:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.165 20:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:55.165 20:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.165 20:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.165 20:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.165 20:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.165 20:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.165 20:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.165 20:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.165 20:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.165 20:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.165 20:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.165 20:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.422 20:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.422 "name": "Existed_Raid", 00:10:55.422 "uuid": "2a5ab80c-8860-4d2e-ba44-8fff3c91bd87", 00:10:55.422 "strip_size_kb": 64, 00:10:55.422 "state": "configuring", 00:10:55.422 "raid_level": "raid0", 00:10:55.422 "superblock": true, 00:10:55.422 "num_base_bdevs": 4, 00:10:55.422 "num_base_bdevs_discovered": 2, 00:10:55.422 "num_base_bdevs_operational": 4, 00:10:55.422 "base_bdevs_list": [ 00:10:55.422 { 00:10:55.422 "name": null, 00:10:55.422 
"uuid": "df85f377-7ccd-420b-957c-0070079c53ba", 00:10:55.422 "is_configured": false, 00:10:55.422 "data_offset": 0, 00:10:55.422 "data_size": 63488 00:10:55.422 }, 00:10:55.422 { 00:10:55.422 "name": null, 00:10:55.422 "uuid": "62c2d292-ed09-47f1-a91e-36c9a20b73a9", 00:10:55.422 "is_configured": false, 00:10:55.422 "data_offset": 0, 00:10:55.422 "data_size": 63488 00:10:55.422 }, 00:10:55.422 { 00:10:55.422 "name": "BaseBdev3", 00:10:55.422 "uuid": "4a5d6bae-ac72-49fd-97c3-8343df849f6b", 00:10:55.422 "is_configured": true, 00:10:55.422 "data_offset": 2048, 00:10:55.422 "data_size": 63488 00:10:55.422 }, 00:10:55.422 { 00:10:55.422 "name": "BaseBdev4", 00:10:55.422 "uuid": "4d258610-33b4-4205-9d6e-738cda9ce0db", 00:10:55.422 "is_configured": true, 00:10:55.422 "data_offset": 2048, 00:10:55.422 "data_size": 63488 00:10:55.422 } 00:10:55.422 ] 00:10:55.422 }' 00:10:55.422 20:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.422 20:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.682 20:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.682 20:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.682 20:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.682 20:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:55.682 20:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.682 20:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:55.682 20:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:55.682 20:23:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.682 20:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.682 [2024-11-26 20:23:49.142653] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:55.682 20:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.682 20:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:55.682 20:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.682 20:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.682 20:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:55.682 20:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.682 20:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.682 20:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.682 20:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.682 20:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.682 20:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.682 20:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.682 20:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.682 20:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.682 20:23:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.682 20:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.682 20:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.682 "name": "Existed_Raid", 00:10:55.682 "uuid": "2a5ab80c-8860-4d2e-ba44-8fff3c91bd87", 00:10:55.682 "strip_size_kb": 64, 00:10:55.682 "state": "configuring", 00:10:55.682 "raid_level": "raid0", 00:10:55.682 "superblock": true, 00:10:55.682 "num_base_bdevs": 4, 00:10:55.682 "num_base_bdevs_discovered": 3, 00:10:55.682 "num_base_bdevs_operational": 4, 00:10:55.682 "base_bdevs_list": [ 00:10:55.682 { 00:10:55.682 "name": null, 00:10:55.682 "uuid": "df85f377-7ccd-420b-957c-0070079c53ba", 00:10:55.682 "is_configured": false, 00:10:55.682 "data_offset": 0, 00:10:55.682 "data_size": 63488 00:10:55.682 }, 00:10:55.682 { 00:10:55.682 "name": "BaseBdev2", 00:10:55.682 "uuid": "62c2d292-ed09-47f1-a91e-36c9a20b73a9", 00:10:55.682 "is_configured": true, 00:10:55.682 "data_offset": 2048, 00:10:55.682 "data_size": 63488 00:10:55.682 }, 00:10:55.682 { 00:10:55.682 "name": "BaseBdev3", 00:10:55.682 "uuid": "4a5d6bae-ac72-49fd-97c3-8343df849f6b", 00:10:55.682 "is_configured": true, 00:10:55.682 "data_offset": 2048, 00:10:55.682 "data_size": 63488 00:10:55.682 }, 00:10:55.682 { 00:10:55.682 "name": "BaseBdev4", 00:10:55.682 "uuid": "4d258610-33b4-4205-9d6e-738cda9ce0db", 00:10:55.682 "is_configured": true, 00:10:55.682 "data_offset": 2048, 00:10:55.682 "data_size": 63488 00:10:55.682 } 00:10:55.682 ] 00:10:55.682 }' 00:10:55.682 20:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.682 20:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.251 20:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:56.251 20:23:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.251 20:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.251 20:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.251 20:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.251 20:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:56.251 20:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.251 20:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:56.251 20:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.251 20:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.251 20:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.251 20:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u df85f377-7ccd-420b-957c-0070079c53ba 00:10:56.251 20:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.251 20:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.251 [2024-11-26 20:23:49.702313] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:56.251 [2024-11-26 20:23:49.702557] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:56.251 [2024-11-26 20:23:49.702582] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:56.251 NewBaseBdev 00:10:56.251 [2024-11-26 20:23:49.702918] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:56.251 [2024-11-26 20:23:49.703076] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:56.251 [2024-11-26 20:23:49.703093] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:10:56.251 [2024-11-26 20:23:49.703218] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:56.251 20:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.251 20:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:56.251 20:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:56.251 20:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:56.251 20:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:56.251 20:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:56.251 20:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:56.251 20:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:56.251 20:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.251 20:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.251 20:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.251 20:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:56.251 20:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.251 20:23:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.251 [ 00:10:56.251 { 00:10:56.251 "name": "NewBaseBdev", 00:10:56.251 "aliases": [ 00:10:56.251 "df85f377-7ccd-420b-957c-0070079c53ba" 00:10:56.251 ], 00:10:56.251 "product_name": "Malloc disk", 00:10:56.251 "block_size": 512, 00:10:56.251 "num_blocks": 65536, 00:10:56.251 "uuid": "df85f377-7ccd-420b-957c-0070079c53ba", 00:10:56.251 "assigned_rate_limits": { 00:10:56.251 "rw_ios_per_sec": 0, 00:10:56.251 "rw_mbytes_per_sec": 0, 00:10:56.251 "r_mbytes_per_sec": 0, 00:10:56.251 "w_mbytes_per_sec": 0 00:10:56.251 }, 00:10:56.251 "claimed": true, 00:10:56.251 "claim_type": "exclusive_write", 00:10:56.251 "zoned": false, 00:10:56.251 "supported_io_types": { 00:10:56.251 "read": true, 00:10:56.251 "write": true, 00:10:56.251 "unmap": true, 00:10:56.251 "flush": true, 00:10:56.251 "reset": true, 00:10:56.251 "nvme_admin": false, 00:10:56.251 "nvme_io": false, 00:10:56.251 "nvme_io_md": false, 00:10:56.251 "write_zeroes": true, 00:10:56.251 "zcopy": true, 00:10:56.251 "get_zone_info": false, 00:10:56.251 "zone_management": false, 00:10:56.251 "zone_append": false, 00:10:56.251 "compare": false, 00:10:56.251 "compare_and_write": false, 00:10:56.251 "abort": true, 00:10:56.251 "seek_hole": false, 00:10:56.251 "seek_data": false, 00:10:56.251 "copy": true, 00:10:56.251 "nvme_iov_md": false 00:10:56.251 }, 00:10:56.251 "memory_domains": [ 00:10:56.251 { 00:10:56.251 "dma_device_id": "system", 00:10:56.251 "dma_device_type": 1 00:10:56.251 }, 00:10:56.251 { 00:10:56.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.251 "dma_device_type": 2 00:10:56.251 } 00:10:56.251 ], 00:10:56.251 "driver_specific": {} 00:10:56.251 } 00:10:56.251 ] 00:10:56.251 20:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.251 20:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:56.251 20:23:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:56.251 20:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.251 20:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:56.251 20:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:56.251 20:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.252 20:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:56.252 20:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.252 20:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.252 20:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.252 20:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.252 20:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.252 20:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.252 20:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.252 20:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.252 20:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.252 20:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.252 "name": "Existed_Raid", 00:10:56.252 "uuid": "2a5ab80c-8860-4d2e-ba44-8fff3c91bd87", 00:10:56.252 "strip_size_kb": 64, 00:10:56.252 
"state": "online", 00:10:56.252 "raid_level": "raid0", 00:10:56.252 "superblock": true, 00:10:56.252 "num_base_bdevs": 4, 00:10:56.252 "num_base_bdevs_discovered": 4, 00:10:56.252 "num_base_bdevs_operational": 4, 00:10:56.252 "base_bdevs_list": [ 00:10:56.252 { 00:10:56.252 "name": "NewBaseBdev", 00:10:56.252 "uuid": "df85f377-7ccd-420b-957c-0070079c53ba", 00:10:56.252 "is_configured": true, 00:10:56.252 "data_offset": 2048, 00:10:56.252 "data_size": 63488 00:10:56.252 }, 00:10:56.252 { 00:10:56.252 "name": "BaseBdev2", 00:10:56.252 "uuid": "62c2d292-ed09-47f1-a91e-36c9a20b73a9", 00:10:56.252 "is_configured": true, 00:10:56.252 "data_offset": 2048, 00:10:56.252 "data_size": 63488 00:10:56.252 }, 00:10:56.252 { 00:10:56.252 "name": "BaseBdev3", 00:10:56.252 "uuid": "4a5d6bae-ac72-49fd-97c3-8343df849f6b", 00:10:56.252 "is_configured": true, 00:10:56.252 "data_offset": 2048, 00:10:56.252 "data_size": 63488 00:10:56.252 }, 00:10:56.252 { 00:10:56.252 "name": "BaseBdev4", 00:10:56.252 "uuid": "4d258610-33b4-4205-9d6e-738cda9ce0db", 00:10:56.252 "is_configured": true, 00:10:56.252 "data_offset": 2048, 00:10:56.252 "data_size": 63488 00:10:56.252 } 00:10:56.252 ] 00:10:56.252 }' 00:10:56.252 20:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.252 20:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.828 20:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:56.828 20:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:56.828 20:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:56.828 20:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:56.828 20:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:56.828 
20:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:56.828 20:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:56.828 20:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:56.828 20:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.828 20:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.828 [2024-11-26 20:23:50.230047] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:56.828 20:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.828 20:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:56.828 "name": "Existed_Raid", 00:10:56.828 "aliases": [ 00:10:56.828 "2a5ab80c-8860-4d2e-ba44-8fff3c91bd87" 00:10:56.828 ], 00:10:56.828 "product_name": "Raid Volume", 00:10:56.828 "block_size": 512, 00:10:56.828 "num_blocks": 253952, 00:10:56.828 "uuid": "2a5ab80c-8860-4d2e-ba44-8fff3c91bd87", 00:10:56.828 "assigned_rate_limits": { 00:10:56.828 "rw_ios_per_sec": 0, 00:10:56.828 "rw_mbytes_per_sec": 0, 00:10:56.828 "r_mbytes_per_sec": 0, 00:10:56.828 "w_mbytes_per_sec": 0 00:10:56.828 }, 00:10:56.828 "claimed": false, 00:10:56.828 "zoned": false, 00:10:56.828 "supported_io_types": { 00:10:56.828 "read": true, 00:10:56.828 "write": true, 00:10:56.828 "unmap": true, 00:10:56.828 "flush": true, 00:10:56.828 "reset": true, 00:10:56.828 "nvme_admin": false, 00:10:56.828 "nvme_io": false, 00:10:56.828 "nvme_io_md": false, 00:10:56.828 "write_zeroes": true, 00:10:56.828 "zcopy": false, 00:10:56.828 "get_zone_info": false, 00:10:56.828 "zone_management": false, 00:10:56.828 "zone_append": false, 00:10:56.828 "compare": false, 00:10:56.828 "compare_and_write": false, 00:10:56.828 "abort": 
false, 00:10:56.828 "seek_hole": false, 00:10:56.828 "seek_data": false, 00:10:56.828 "copy": false, 00:10:56.828 "nvme_iov_md": false 00:10:56.828 }, 00:10:56.828 "memory_domains": [ 00:10:56.828 { 00:10:56.828 "dma_device_id": "system", 00:10:56.828 "dma_device_type": 1 00:10:56.828 }, 00:10:56.828 { 00:10:56.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.828 "dma_device_type": 2 00:10:56.828 }, 00:10:56.828 { 00:10:56.828 "dma_device_id": "system", 00:10:56.828 "dma_device_type": 1 00:10:56.828 }, 00:10:56.828 { 00:10:56.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.828 "dma_device_type": 2 00:10:56.828 }, 00:10:56.828 { 00:10:56.828 "dma_device_id": "system", 00:10:56.828 "dma_device_type": 1 00:10:56.828 }, 00:10:56.828 { 00:10:56.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.828 "dma_device_type": 2 00:10:56.828 }, 00:10:56.828 { 00:10:56.828 "dma_device_id": "system", 00:10:56.828 "dma_device_type": 1 00:10:56.828 }, 00:10:56.828 { 00:10:56.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.828 "dma_device_type": 2 00:10:56.828 } 00:10:56.828 ], 00:10:56.828 "driver_specific": { 00:10:56.828 "raid": { 00:10:56.828 "uuid": "2a5ab80c-8860-4d2e-ba44-8fff3c91bd87", 00:10:56.828 "strip_size_kb": 64, 00:10:56.828 "state": "online", 00:10:56.828 "raid_level": "raid0", 00:10:56.828 "superblock": true, 00:10:56.828 "num_base_bdevs": 4, 00:10:56.828 "num_base_bdevs_discovered": 4, 00:10:56.828 "num_base_bdevs_operational": 4, 00:10:56.828 "base_bdevs_list": [ 00:10:56.828 { 00:10:56.828 "name": "NewBaseBdev", 00:10:56.828 "uuid": "df85f377-7ccd-420b-957c-0070079c53ba", 00:10:56.828 "is_configured": true, 00:10:56.828 "data_offset": 2048, 00:10:56.828 "data_size": 63488 00:10:56.828 }, 00:10:56.828 { 00:10:56.828 "name": "BaseBdev2", 00:10:56.828 "uuid": "62c2d292-ed09-47f1-a91e-36c9a20b73a9", 00:10:56.828 "is_configured": true, 00:10:56.828 "data_offset": 2048, 00:10:56.829 "data_size": 63488 00:10:56.829 }, 00:10:56.829 { 00:10:56.829 
"name": "BaseBdev3", 00:10:56.829 "uuid": "4a5d6bae-ac72-49fd-97c3-8343df849f6b", 00:10:56.829 "is_configured": true, 00:10:56.829 "data_offset": 2048, 00:10:56.829 "data_size": 63488 00:10:56.829 }, 00:10:56.829 { 00:10:56.829 "name": "BaseBdev4", 00:10:56.829 "uuid": "4d258610-33b4-4205-9d6e-738cda9ce0db", 00:10:56.829 "is_configured": true, 00:10:56.829 "data_offset": 2048, 00:10:56.829 "data_size": 63488 00:10:56.829 } 00:10:56.829 ] 00:10:56.829 } 00:10:56.829 } 00:10:56.829 }' 00:10:56.829 20:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:56.829 20:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:56.829 BaseBdev2 00:10:56.829 BaseBdev3 00:10:56.829 BaseBdev4' 00:10:56.829 20:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.090 20:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:57.090 20:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.090 20:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:57.090 20:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.090 20:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.090 20:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.090 20:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.090 20:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.090 20:23:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.090 20:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.090 20:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:57.090 20:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.090 20:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.090 20:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.090 20:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.090 20:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.090 20:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.090 20:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.090 20:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.090 20:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:57.090 20:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.090 20:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.090 20:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.090 20:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.090 20:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:57.090 20:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.090 20:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:57.090 20:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.090 20:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.090 20:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.090 20:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.090 20:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.090 20:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.090 20:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:57.090 20:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.090 20:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.090 [2024-11-26 20:23:50.573021] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:57.090 [2024-11-26 20:23:50.573064] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:57.090 [2024-11-26 20:23:50.573180] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:57.090 [2024-11-26 20:23:50.573275] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:57.090 [2024-11-26 20:23:50.573296] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, 
state offline 00:10:57.090 20:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.090 20:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 81467 00:10:57.090 20:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 81467 ']' 00:10:57.090 20:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 81467 00:10:57.090 20:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:10:57.090 20:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:57.090 20:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81467 00:10:57.090 killing process with pid 81467 00:10:57.090 20:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:57.090 20:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:57.090 20:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81467' 00:10:57.090 20:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 81467 00:10:57.090 20:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 81467 00:10:57.090 [2024-11-26 20:23:50.609582] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:57.350 [2024-11-26 20:23:50.675189] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:57.609 20:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:57.609 00:10:57.609 real 0m10.286s 00:10:57.609 user 0m17.339s 00:10:57.609 sys 0m2.219s 00:10:57.609 20:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:57.609 20:23:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.609 ************************************ 00:10:57.609 END TEST raid_state_function_test_sb 00:10:57.609 ************************************ 00:10:57.609 20:23:51 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:10:57.609 20:23:51 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:57.609 20:23:51 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:57.609 20:23:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:57.609 ************************************ 00:10:57.609 START TEST raid_superblock_test 00:10:57.609 ************************************ 00:10:57.609 20:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 4 00:10:57.609 20:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:57.609 20:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:57.609 20:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:57.609 20:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:57.609 20:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:57.609 20:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:57.609 20:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:57.609 20:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:57.609 20:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:57.609 20:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:57.609 20:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:10:57.609 20:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:57.609 20:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:57.609 20:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:57.609 20:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:57.609 20:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:57.609 20:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=82126 00:10:57.609 20:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 82126 00:10:57.609 20:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 82126 ']' 00:10:57.609 20:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:57.609 20:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.609 20:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:57.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:57.609 20:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:57.609 20:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:57.609 20:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.868 [2024-11-26 20:23:51.233993] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:57.868 [2024-11-26 20:23:51.234168] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82126 ] 00:10:57.868 [2024-11-26 20:23:51.402470] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.128 [2024-11-26 20:23:51.486592] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.128 [2024-11-26 20:23:51.566848] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:58.128 [2024-11-26 20:23:51.566915] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:58.698 
20:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.698 malloc1 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.698 [2024-11-26 20:23:52.160280] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:58.698 [2024-11-26 20:23:52.160412] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.698 [2024-11-26 20:23:52.160448] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:58.698 [2024-11-26 20:23:52.160473] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.698 [2024-11-26 20:23:52.163154] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.698 [2024-11-26 20:23:52.163214] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:58.698 pt1 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.698 malloc2 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.698 [2024-11-26 20:23:52.204597] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:58.698 [2024-11-26 20:23:52.204717] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.698 [2024-11-26 20:23:52.204746] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:58.698 [2024-11-26 20:23:52.204767] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.698 [2024-11-26 20:23:52.208075] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.698 [2024-11-26 20:23:52.208144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:58.698 
pt2 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.698 malloc3 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.698 [2024-11-26 20:23:52.240070] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:58.698 [2024-11-26 20:23:52.240167] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.698 [2024-11-26 20:23:52.240195] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:58.698 [2024-11-26 20:23:52.240210] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.698 [2024-11-26 20:23:52.242682] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.698 [2024-11-26 20:23:52.242730] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:58.698 pt3 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:58.698 20:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.958 20:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.958 malloc4 00:10:58.958 20:23:52 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.958 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:58.958 20:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.958 20:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.958 [2024-11-26 20:23:52.270462] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:58.958 [2024-11-26 20:23:52.270556] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.958 [2024-11-26 20:23:52.270583] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:58.958 [2024-11-26 20:23:52.270601] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.958 [2024-11-26 20:23:52.273278] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.958 [2024-11-26 20:23:52.273345] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:58.958 pt4 00:10:58.958 20:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.958 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:58.958 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:58.958 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:58.958 20:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.958 20:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.958 [2024-11-26 20:23:52.282568] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:58.958 [2024-11-26 
20:23:52.284747] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:58.958 [2024-11-26 20:23:52.284830] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:58.958 [2024-11-26 20:23:52.284911] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:58.958 [2024-11-26 20:23:52.285105] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:10:58.958 [2024-11-26 20:23:52.285133] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:58.958 [2024-11-26 20:23:52.285479] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:58.958 [2024-11-26 20:23:52.285712] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:10:58.958 [2024-11-26 20:23:52.285736] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:10:58.958 [2024-11-26 20:23:52.285935] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:58.958 20:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.958 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:58.958 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:58.958 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:58.958 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:58.958 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.958 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.958 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:58.958 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.958 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.958 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.958 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.958 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:58.958 20:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.958 20:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.958 20:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.958 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.958 "name": "raid_bdev1", 00:10:58.958 "uuid": "ef3dde9a-56f0-4677-96ed-32bb3aa88f95", 00:10:58.958 "strip_size_kb": 64, 00:10:58.958 "state": "online", 00:10:58.958 "raid_level": "raid0", 00:10:58.958 "superblock": true, 00:10:58.958 "num_base_bdevs": 4, 00:10:58.958 "num_base_bdevs_discovered": 4, 00:10:58.958 "num_base_bdevs_operational": 4, 00:10:58.958 "base_bdevs_list": [ 00:10:58.958 { 00:10:58.958 "name": "pt1", 00:10:58.958 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:58.958 "is_configured": true, 00:10:58.958 "data_offset": 2048, 00:10:58.958 "data_size": 63488 00:10:58.958 }, 00:10:58.958 { 00:10:58.958 "name": "pt2", 00:10:58.958 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:58.958 "is_configured": true, 00:10:58.958 "data_offset": 2048, 00:10:58.958 "data_size": 63488 00:10:58.958 }, 00:10:58.958 { 00:10:58.958 "name": "pt3", 00:10:58.958 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:58.958 "is_configured": true, 00:10:58.958 "data_offset": 2048, 00:10:58.959 
"data_size": 63488 00:10:58.959 }, 00:10:58.959 { 00:10:58.959 "name": "pt4", 00:10:58.959 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:58.959 "is_configured": true, 00:10:58.959 "data_offset": 2048, 00:10:58.959 "data_size": 63488 00:10:58.959 } 00:10:58.959 ] 00:10:58.959 }' 00:10:58.959 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.959 20:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.218 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:59.218 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:59.218 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:59.218 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:59.218 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:59.218 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:59.218 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:59.218 20:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.218 20:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.218 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:59.218 [2024-11-26 20:23:52.762142] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:59.478 20:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.478 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:59.478 "name": "raid_bdev1", 00:10:59.478 "aliases": [ 00:10:59.478 "ef3dde9a-56f0-4677-96ed-32bb3aa88f95" 
00:10:59.478 ], 00:10:59.478 "product_name": "Raid Volume", 00:10:59.478 "block_size": 512, 00:10:59.478 "num_blocks": 253952, 00:10:59.478 "uuid": "ef3dde9a-56f0-4677-96ed-32bb3aa88f95", 00:10:59.478 "assigned_rate_limits": { 00:10:59.478 "rw_ios_per_sec": 0, 00:10:59.478 "rw_mbytes_per_sec": 0, 00:10:59.478 "r_mbytes_per_sec": 0, 00:10:59.478 "w_mbytes_per_sec": 0 00:10:59.478 }, 00:10:59.478 "claimed": false, 00:10:59.478 "zoned": false, 00:10:59.478 "supported_io_types": { 00:10:59.478 "read": true, 00:10:59.478 "write": true, 00:10:59.478 "unmap": true, 00:10:59.478 "flush": true, 00:10:59.478 "reset": true, 00:10:59.478 "nvme_admin": false, 00:10:59.478 "nvme_io": false, 00:10:59.478 "nvme_io_md": false, 00:10:59.478 "write_zeroes": true, 00:10:59.478 "zcopy": false, 00:10:59.478 "get_zone_info": false, 00:10:59.478 "zone_management": false, 00:10:59.478 "zone_append": false, 00:10:59.478 "compare": false, 00:10:59.478 "compare_and_write": false, 00:10:59.478 "abort": false, 00:10:59.478 "seek_hole": false, 00:10:59.478 "seek_data": false, 00:10:59.478 "copy": false, 00:10:59.478 "nvme_iov_md": false 00:10:59.478 }, 00:10:59.478 "memory_domains": [ 00:10:59.478 { 00:10:59.478 "dma_device_id": "system", 00:10:59.478 "dma_device_type": 1 00:10:59.478 }, 00:10:59.478 { 00:10:59.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.478 "dma_device_type": 2 00:10:59.478 }, 00:10:59.478 { 00:10:59.478 "dma_device_id": "system", 00:10:59.478 "dma_device_type": 1 00:10:59.478 }, 00:10:59.478 { 00:10:59.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.478 "dma_device_type": 2 00:10:59.478 }, 00:10:59.478 { 00:10:59.478 "dma_device_id": "system", 00:10:59.478 "dma_device_type": 1 00:10:59.478 }, 00:10:59.478 { 00:10:59.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.478 "dma_device_type": 2 00:10:59.478 }, 00:10:59.478 { 00:10:59.478 "dma_device_id": "system", 00:10:59.478 "dma_device_type": 1 00:10:59.478 }, 00:10:59.478 { 00:10:59.478 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:59.478 "dma_device_type": 2 00:10:59.478 } 00:10:59.478 ], 00:10:59.478 "driver_specific": { 00:10:59.478 "raid": { 00:10:59.478 "uuid": "ef3dde9a-56f0-4677-96ed-32bb3aa88f95", 00:10:59.478 "strip_size_kb": 64, 00:10:59.478 "state": "online", 00:10:59.478 "raid_level": "raid0", 00:10:59.478 "superblock": true, 00:10:59.478 "num_base_bdevs": 4, 00:10:59.478 "num_base_bdevs_discovered": 4, 00:10:59.478 "num_base_bdevs_operational": 4, 00:10:59.478 "base_bdevs_list": [ 00:10:59.478 { 00:10:59.478 "name": "pt1", 00:10:59.478 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:59.478 "is_configured": true, 00:10:59.478 "data_offset": 2048, 00:10:59.478 "data_size": 63488 00:10:59.478 }, 00:10:59.478 { 00:10:59.478 "name": "pt2", 00:10:59.478 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:59.478 "is_configured": true, 00:10:59.478 "data_offset": 2048, 00:10:59.478 "data_size": 63488 00:10:59.478 }, 00:10:59.478 { 00:10:59.478 "name": "pt3", 00:10:59.478 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:59.478 "is_configured": true, 00:10:59.478 "data_offset": 2048, 00:10:59.478 "data_size": 63488 00:10:59.478 }, 00:10:59.478 { 00:10:59.478 "name": "pt4", 00:10:59.478 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:59.478 "is_configured": true, 00:10:59.478 "data_offset": 2048, 00:10:59.478 "data_size": 63488 00:10:59.479 } 00:10:59.479 ] 00:10:59.479 } 00:10:59.479 } 00:10:59.479 }' 00:10:59.479 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:59.479 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:59.479 pt2 00:10:59.479 pt3 00:10:59.479 pt4' 00:10:59.479 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.479 20:23:52 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:59.479 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.479 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:59.479 20:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.479 20:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.479 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.479 20:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.479 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.479 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.479 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.479 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.479 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:59.479 20:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.479 20:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.479 20:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.479 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.479 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.479 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.479 20:23:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:59.479 20:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.479 20:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.479 20:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.479 20:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.479 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.479 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.479 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.479 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:59.479 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.479 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.479 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:59.740 [2024-11-26 20:23:53.073531] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ef3dde9a-56f0-4677-96ed-32bb3aa88f95 00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z ef3dde9a-56f0-4677-96ed-32bb3aa88f95 ']' 00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.740 [2024-11-26 20:23:53.125116] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:59.740 [2024-11-26 20:23:53.125239] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:59.740 [2024-11-26 20:23:53.125405] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:59.740 [2024-11-26 20:23:53.125541] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:59.740 [2024-11-26 20:23:53.125612] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.740 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.740 [2024-11-26 20:23:53.280906] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:59.740 [2024-11-26 20:23:53.283054] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:59.740 [2024-11-26 20:23:53.283165] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:59.740 [2024-11-26 20:23:53.283244] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:59.740 [2024-11-26 20:23:53.283371] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:59.740 [2024-11-26 20:23:53.283500] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:59.740 [2024-11-26 20:23:53.283585] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:59.740 [2024-11-26 20:23:53.283678] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:59.740 [2024-11-26 20:23:53.283745] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:59.740 [2024-11-26 20:23:53.283784] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state 
configuring 00:10:59.740 request: 00:10:59.740 { 00:10:59.740 "name": "raid_bdev1", 00:10:59.740 "raid_level": "raid0", 00:10:59.740 "base_bdevs": [ 00:10:59.740 "malloc1", 00:10:59.740 "malloc2", 00:10:59.740 "malloc3", 00:10:59.740 "malloc4" 00:10:59.740 ], 00:10:59.740 "strip_size_kb": 64, 00:10:59.740 "superblock": false, 00:10:59.740 "method": "bdev_raid_create", 00:10:59.740 "req_id": 1 00:10:59.740 } 00:10:59.740 Got JSON-RPC error response 00:10:59.740 response: 00:10:59.740 { 00:10:59.740 "code": -17, 00:10:59.740 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:59.740 } 00:11:00.000 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:00.000 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:11:00.000 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:00.000 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:00.001 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:00.001 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:00.001 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.001 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.001 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.001 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.001 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:00.001 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:00.001 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:11:00.001 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.001 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.001 [2024-11-26 20:23:53.348783] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:00.001 [2024-11-26 20:23:53.348868] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.001 [2024-11-26 20:23:53.348897] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:00.001 [2024-11-26 20:23:53.348910] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.001 [2024-11-26 20:23:53.351358] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.001 [2024-11-26 20:23:53.351403] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:00.001 [2024-11-26 20:23:53.351505] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:00.001 [2024-11-26 20:23:53.351555] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:00.001 pt1 00:11:00.001 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.001 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:11:00.001 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:00.001 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.001 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:00.001 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.001 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:00.001 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.001 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.001 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.001 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.001 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.001 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.001 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.001 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.001 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.001 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.001 "name": "raid_bdev1", 00:11:00.001 "uuid": "ef3dde9a-56f0-4677-96ed-32bb3aa88f95", 00:11:00.001 "strip_size_kb": 64, 00:11:00.001 "state": "configuring", 00:11:00.001 "raid_level": "raid0", 00:11:00.001 "superblock": true, 00:11:00.001 "num_base_bdevs": 4, 00:11:00.001 "num_base_bdevs_discovered": 1, 00:11:00.001 "num_base_bdevs_operational": 4, 00:11:00.001 "base_bdevs_list": [ 00:11:00.001 { 00:11:00.001 "name": "pt1", 00:11:00.001 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:00.001 "is_configured": true, 00:11:00.001 "data_offset": 2048, 00:11:00.001 "data_size": 63488 00:11:00.001 }, 00:11:00.001 { 00:11:00.001 "name": null, 00:11:00.001 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:00.001 "is_configured": false, 00:11:00.001 "data_offset": 2048, 00:11:00.001 "data_size": 63488 00:11:00.001 }, 00:11:00.001 { 00:11:00.001 "name": null, 00:11:00.001 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:11:00.001 "is_configured": false, 00:11:00.001 "data_offset": 2048, 00:11:00.001 "data_size": 63488 00:11:00.001 }, 00:11:00.001 { 00:11:00.001 "name": null, 00:11:00.001 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:00.001 "is_configured": false, 00:11:00.001 "data_offset": 2048, 00:11:00.001 "data_size": 63488 00:11:00.001 } 00:11:00.001 ] 00:11:00.001 }' 00:11:00.001 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.001 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.570 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:00.570 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:00.570 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.570 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.570 [2024-11-26 20:23:53.844011] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:00.570 [2024-11-26 20:23:53.844168] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.570 [2024-11-26 20:23:53.844216] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:00.570 [2024-11-26 20:23:53.844230] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.570 [2024-11-26 20:23:53.844750] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.570 [2024-11-26 20:23:53.844775] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:00.570 [2024-11-26 20:23:53.844872] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:00.570 [2024-11-26 20:23:53.844901] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:00.570 pt2 00:11:00.570 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.570 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:00.570 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.570 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.570 [2024-11-26 20:23:53.856007] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:00.570 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.570 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:11:00.570 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:00.570 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.570 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:00.570 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.570 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.570 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.570 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.570 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.570 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.570 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.570 20:23:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.570 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.570 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.570 20:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.570 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.570 "name": "raid_bdev1", 00:11:00.570 "uuid": "ef3dde9a-56f0-4677-96ed-32bb3aa88f95", 00:11:00.570 "strip_size_kb": 64, 00:11:00.570 "state": "configuring", 00:11:00.570 "raid_level": "raid0", 00:11:00.570 "superblock": true, 00:11:00.570 "num_base_bdevs": 4, 00:11:00.570 "num_base_bdevs_discovered": 1, 00:11:00.570 "num_base_bdevs_operational": 4, 00:11:00.570 "base_bdevs_list": [ 00:11:00.570 { 00:11:00.570 "name": "pt1", 00:11:00.570 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:00.570 "is_configured": true, 00:11:00.570 "data_offset": 2048, 00:11:00.570 "data_size": 63488 00:11:00.570 }, 00:11:00.570 { 00:11:00.570 "name": null, 00:11:00.570 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:00.570 "is_configured": false, 00:11:00.570 "data_offset": 0, 00:11:00.570 "data_size": 63488 00:11:00.571 }, 00:11:00.571 { 00:11:00.571 "name": null, 00:11:00.571 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:00.571 "is_configured": false, 00:11:00.571 "data_offset": 2048, 00:11:00.571 "data_size": 63488 00:11:00.571 }, 00:11:00.571 { 00:11:00.571 "name": null, 00:11:00.571 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:00.571 "is_configured": false, 00:11:00.571 "data_offset": 2048, 00:11:00.571 "data_size": 63488 00:11:00.571 } 00:11:00.571 ] 00:11:00.571 }' 00:11:00.571 20:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.571 20:23:53 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:00.830 20:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:00.830 20:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:00.830 20:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:00.830 20:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.830 20:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.830 [2024-11-26 20:23:54.315245] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:00.830 [2024-11-26 20:23:54.315437] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.830 [2024-11-26 20:23:54.315498] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:00.830 [2024-11-26 20:23:54.315545] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.830 [2024-11-26 20:23:54.316105] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.830 [2024-11-26 20:23:54.316193] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:00.830 [2024-11-26 20:23:54.316326] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:00.830 [2024-11-26 20:23:54.316397] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:00.830 pt2 00:11:00.830 20:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.830 20:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:00.830 20:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:00.830 20:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:00.830 20:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.830 20:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.830 [2024-11-26 20:23:54.327198] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:00.830 [2024-11-26 20:23:54.327391] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.830 [2024-11-26 20:23:54.327458] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:00.830 [2024-11-26 20:23:54.327509] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.830 [2024-11-26 20:23:54.328067] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.830 [2024-11-26 20:23:54.328159] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:00.830 [2024-11-26 20:23:54.328297] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:00.830 [2024-11-26 20:23:54.328335] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:00.830 pt3 00:11:00.830 20:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.830 20:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:00.830 20:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:00.830 20:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:00.830 20:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.830 20:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.830 [2024-11-26 20:23:54.339165] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:00.830 [2024-11-26 20:23:54.339305] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.830 [2024-11-26 20:23:54.339372] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:00.830 [2024-11-26 20:23:54.339418] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.830 [2024-11-26 20:23:54.339984] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.830 [2024-11-26 20:23:54.340066] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:00.830 [2024-11-26 20:23:54.340208] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:00.830 [2024-11-26 20:23:54.340283] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:00.830 [2024-11-26 20:23:54.340480] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:11:00.830 [2024-11-26 20:23:54.340554] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:00.830 [2024-11-26 20:23:54.340921] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:00.830 [2024-11-26 20:23:54.341132] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:11:00.830 [2024-11-26 20:23:54.341188] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:11:00.830 [2024-11-26 20:23:54.341385] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:00.830 pt4 00:11:00.830 20:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.830 20:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:00.830 20:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:11:00.830 20:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:00.830 20:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:00.830 20:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:00.830 20:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:00.830 20:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.830 20:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.830 20:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.831 20:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.831 20:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.831 20:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.831 20:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.831 20:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.831 20:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.831 20:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.831 20:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.115 20:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.115 "name": "raid_bdev1", 00:11:01.115 "uuid": "ef3dde9a-56f0-4677-96ed-32bb3aa88f95", 00:11:01.115 "strip_size_kb": 64, 00:11:01.115 "state": "online", 00:11:01.115 "raid_level": "raid0", 00:11:01.115 
"superblock": true, 00:11:01.115 "num_base_bdevs": 4, 00:11:01.115 "num_base_bdevs_discovered": 4, 00:11:01.115 "num_base_bdevs_operational": 4, 00:11:01.115 "base_bdevs_list": [ 00:11:01.115 { 00:11:01.115 "name": "pt1", 00:11:01.115 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:01.115 "is_configured": true, 00:11:01.115 "data_offset": 2048, 00:11:01.115 "data_size": 63488 00:11:01.115 }, 00:11:01.115 { 00:11:01.115 "name": "pt2", 00:11:01.115 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:01.115 "is_configured": true, 00:11:01.115 "data_offset": 2048, 00:11:01.115 "data_size": 63488 00:11:01.115 }, 00:11:01.115 { 00:11:01.115 "name": "pt3", 00:11:01.115 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:01.115 "is_configured": true, 00:11:01.115 "data_offset": 2048, 00:11:01.115 "data_size": 63488 00:11:01.115 }, 00:11:01.115 { 00:11:01.115 "name": "pt4", 00:11:01.115 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:01.115 "is_configured": true, 00:11:01.115 "data_offset": 2048, 00:11:01.115 "data_size": 63488 00:11:01.115 } 00:11:01.115 ] 00:11:01.115 }' 00:11:01.115 20:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.115 20:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.374 20:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:01.374 20:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:01.374 20:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:01.374 20:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:01.374 20:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:01.374 20:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:01.374 20:23:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:01.374 20:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.374 20:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.374 20:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:01.374 [2024-11-26 20:23:54.742903] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:01.374 20:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.374 20:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:01.374 "name": "raid_bdev1", 00:11:01.374 "aliases": [ 00:11:01.374 "ef3dde9a-56f0-4677-96ed-32bb3aa88f95" 00:11:01.374 ], 00:11:01.374 "product_name": "Raid Volume", 00:11:01.374 "block_size": 512, 00:11:01.374 "num_blocks": 253952, 00:11:01.374 "uuid": "ef3dde9a-56f0-4677-96ed-32bb3aa88f95", 00:11:01.374 "assigned_rate_limits": { 00:11:01.374 "rw_ios_per_sec": 0, 00:11:01.374 "rw_mbytes_per_sec": 0, 00:11:01.374 "r_mbytes_per_sec": 0, 00:11:01.374 "w_mbytes_per_sec": 0 00:11:01.374 }, 00:11:01.374 "claimed": false, 00:11:01.374 "zoned": false, 00:11:01.374 "supported_io_types": { 00:11:01.374 "read": true, 00:11:01.374 "write": true, 00:11:01.374 "unmap": true, 00:11:01.374 "flush": true, 00:11:01.374 "reset": true, 00:11:01.374 "nvme_admin": false, 00:11:01.374 "nvme_io": false, 00:11:01.374 "nvme_io_md": false, 00:11:01.374 "write_zeroes": true, 00:11:01.374 "zcopy": false, 00:11:01.374 "get_zone_info": false, 00:11:01.374 "zone_management": false, 00:11:01.374 "zone_append": false, 00:11:01.374 "compare": false, 00:11:01.374 "compare_and_write": false, 00:11:01.374 "abort": false, 00:11:01.374 "seek_hole": false, 00:11:01.374 "seek_data": false, 00:11:01.374 "copy": false, 00:11:01.374 "nvme_iov_md": false 00:11:01.374 }, 00:11:01.374 
"memory_domains": [ 00:11:01.374 { 00:11:01.374 "dma_device_id": "system", 00:11:01.374 "dma_device_type": 1 00:11:01.374 }, 00:11:01.374 { 00:11:01.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.374 "dma_device_type": 2 00:11:01.374 }, 00:11:01.374 { 00:11:01.374 "dma_device_id": "system", 00:11:01.374 "dma_device_type": 1 00:11:01.374 }, 00:11:01.374 { 00:11:01.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.374 "dma_device_type": 2 00:11:01.374 }, 00:11:01.374 { 00:11:01.374 "dma_device_id": "system", 00:11:01.374 "dma_device_type": 1 00:11:01.374 }, 00:11:01.374 { 00:11:01.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.374 "dma_device_type": 2 00:11:01.374 }, 00:11:01.374 { 00:11:01.374 "dma_device_id": "system", 00:11:01.374 "dma_device_type": 1 00:11:01.374 }, 00:11:01.374 { 00:11:01.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.374 "dma_device_type": 2 00:11:01.374 } 00:11:01.374 ], 00:11:01.374 "driver_specific": { 00:11:01.374 "raid": { 00:11:01.374 "uuid": "ef3dde9a-56f0-4677-96ed-32bb3aa88f95", 00:11:01.374 "strip_size_kb": 64, 00:11:01.374 "state": "online", 00:11:01.374 "raid_level": "raid0", 00:11:01.374 "superblock": true, 00:11:01.374 "num_base_bdevs": 4, 00:11:01.374 "num_base_bdevs_discovered": 4, 00:11:01.374 "num_base_bdevs_operational": 4, 00:11:01.374 "base_bdevs_list": [ 00:11:01.374 { 00:11:01.374 "name": "pt1", 00:11:01.374 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:01.374 "is_configured": true, 00:11:01.374 "data_offset": 2048, 00:11:01.374 "data_size": 63488 00:11:01.374 }, 00:11:01.374 { 00:11:01.374 "name": "pt2", 00:11:01.374 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:01.374 "is_configured": true, 00:11:01.374 "data_offset": 2048, 00:11:01.374 "data_size": 63488 00:11:01.374 }, 00:11:01.374 { 00:11:01.374 "name": "pt3", 00:11:01.374 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:01.374 "is_configured": true, 00:11:01.374 "data_offset": 2048, 00:11:01.374 "data_size": 63488 
00:11:01.374 }, 00:11:01.374 { 00:11:01.374 "name": "pt4", 00:11:01.374 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:01.374 "is_configured": true, 00:11:01.374 "data_offset": 2048, 00:11:01.374 "data_size": 63488 00:11:01.374 } 00:11:01.374 ] 00:11:01.374 } 00:11:01.374 } 00:11:01.374 }' 00:11:01.374 20:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:01.374 20:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:01.374 pt2 00:11:01.374 pt3 00:11:01.374 pt4' 00:11:01.374 20:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.374 20:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:01.374 20:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.374 20:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:01.374 20:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.374 20:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.374 20:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.374 20:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.633 20:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:01.633 20:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:01.633 20:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.633 20:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:11:01.633 20:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.633 20:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.633 20:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.633 20:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.633 20:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:01.633 20:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:01.633 20:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.633 20:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:01.633 20:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.633 20:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.633 20:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.633 20:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.633 20:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:01.633 20:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:01.633 20:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.633 20:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:01.633 20:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.633 20:23:55 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:01.633 20:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.633 20:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.633 20:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:01.633 20:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:01.633 20:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:01.633 20:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:01.633 20:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.633 20:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.633 [2024-11-26 20:23:55.102303] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:01.633 20:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.633 20:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' ef3dde9a-56f0-4677-96ed-32bb3aa88f95 '!=' ef3dde9a-56f0-4677-96ed-32bb3aa88f95 ']' 00:11:01.633 20:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:11:01.633 20:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:01.633 20:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:01.633 20:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 82126 00:11:01.633 20:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 82126 ']' 00:11:01.633 20:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 82126 00:11:01.633 20:23:55 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:11:01.633 20:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:01.633 20:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82126 00:11:01.633 20:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:01.633 20:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:01.633 20:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82126' 00:11:01.633 killing process with pid 82126 00:11:01.633 20:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 82126 00:11:01.633 [2024-11-26 20:23:55.177463] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:01.634 20:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 82126 00:11:01.634 [2024-11-26 20:23:55.177724] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:01.634 [2024-11-26 20:23:55.177811] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:01.634 [2024-11-26 20:23:55.177900] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:11:01.892 [2024-11-26 20:23:55.245211] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:02.151 20:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:02.151 00:11:02.151 real 0m4.495s 00:11:02.151 user 0m6.957s 00:11:02.151 sys 0m1.008s 00:11:02.151 20:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:02.151 20:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.151 ************************************ 00:11:02.151 END TEST raid_superblock_test 
00:11:02.151 ************************************ 00:11:02.151 20:23:55 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:11:02.151 20:23:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:02.151 20:23:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:02.151 20:23:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:02.151 ************************************ 00:11:02.151 START TEST raid_read_error_test 00:11:02.151 ************************************ 00:11:02.151 20:23:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 read 00:11:02.151 20:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:02.151 20:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:02.151 20:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:02.151 20:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:02.151 20:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:02.151 20:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:02.151 20:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:02.151 20:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:02.151 20:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:02.151 20:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:02.151 20:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:02.151 20:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:02.151 20:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( 
i++ )) 00:11:02.151 20:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:02.151 20:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:02.151 20:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:02.151 20:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:02.151 20:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:02.151 20:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:02.151 20:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:02.151 20:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:02.151 20:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:02.151 20:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:02.151 20:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:02.151 20:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:02.151 20:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:02.151 20:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:02.151 20:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:02.151 20:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.elQp1KKNge 00:11:02.411 20:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=82374 00:11:02.411 20:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f 
-L bdev_raid 00:11:02.411 20:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 82374 00:11:02.411 20:23:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 82374 ']' 00:11:02.411 20:23:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.411 20:23:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:02.411 20:23:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.411 20:23:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:02.411 20:23:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.411 [2024-11-26 20:23:55.785305] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:11:02.411 [2024-11-26 20:23:55.785545] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82374 ] 00:11:02.411 [2024-11-26 20:23:55.949012] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.712 [2024-11-26 20:23:56.033867] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.712 [2024-11-26 20:23:56.113757] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:02.712 [2024-11-26 20:23:56.113899] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:03.279 20:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:03.279 20:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:03.279 20:23:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:03.279 20:23:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:03.279 20:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.279 20:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.279 BaseBdev1_malloc 00:11:03.279 20:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.279 20:23:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:03.279 20:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.279 20:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.279 true 00:11:03.279 20:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:03.279 20:23:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:03.279 20:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.279 20:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.279 [2024-11-26 20:23:56.686341] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:03.279 [2024-11-26 20:23:56.686466] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:03.279 [2024-11-26 20:23:56.686497] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:03.279 [2024-11-26 20:23:56.686508] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.279 [2024-11-26 20:23:56.689134] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.279 [2024-11-26 20:23:56.689196] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:03.279 BaseBdev1 00:11:03.279 20:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.279 20:23:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:03.279 20:23:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:03.279 20:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.279 20:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.279 BaseBdev2_malloc 00:11:03.279 20:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.279 20:23:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:03.279 20:23:56 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.279 20:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.279 true 00:11:03.279 20:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.279 20:23:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:03.279 20:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.279 20:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.279 [2024-11-26 20:23:56.737200] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:03.279 [2024-11-26 20:23:56.737266] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:03.279 [2024-11-26 20:23:56.737290] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:03.279 [2024-11-26 20:23:56.737300] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.279 [2024-11-26 20:23:56.739560] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.279 [2024-11-26 20:23:56.739599] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:03.279 BaseBdev2 00:11:03.279 20:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.279 20:23:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:03.279 20:23:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:03.279 20:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.279 20:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.279 BaseBdev3_malloc 00:11:03.279 20:23:56 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.279 20:23:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:03.279 20:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.279 20:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.279 true 00:11:03.279 20:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.279 20:23:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:03.279 20:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.279 20:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.279 [2024-11-26 20:23:56.784492] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:03.280 [2024-11-26 20:23:56.784559] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:03.280 [2024-11-26 20:23:56.784599] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:03.280 [2024-11-26 20:23:56.784610] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.280 [2024-11-26 20:23:56.786981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.280 [2024-11-26 20:23:56.787059] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:03.280 BaseBdev3 00:11:03.280 20:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.280 20:23:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:03.280 20:23:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:03.280 20:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.280 20:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.280 BaseBdev4_malloc 00:11:03.280 20:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.280 20:23:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:03.280 20:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.280 20:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.280 true 00:11:03.280 20:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.280 20:23:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:03.280 20:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.280 20:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.280 [2024-11-26 20:23:56.826243] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:03.280 [2024-11-26 20:23:56.826355] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:03.280 [2024-11-26 20:23:56.826402] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:03.280 [2024-11-26 20:23:56.826411] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.280 [2024-11-26 20:23:56.828745] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.280 [2024-11-26 20:23:56.828782] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:03.537 BaseBdev4 00:11:03.537 20:23:56 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.537 20:23:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:03.537 20:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.537 20:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.537 [2024-11-26 20:23:56.838274] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:03.537 [2024-11-26 20:23:56.840324] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:03.537 [2024-11-26 20:23:56.840414] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:03.537 [2024-11-26 20:23:56.840470] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:03.537 [2024-11-26 20:23:56.840729] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:11:03.537 [2024-11-26 20:23:56.840760] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:03.537 [2024-11-26 20:23:56.841056] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:03.537 [2024-11-26 20:23:56.841217] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:11:03.537 [2024-11-26 20:23:56.841229] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:11:03.537 [2024-11-26 20:23:56.841393] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:03.537 20:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.537 20:23:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:03.537 20:23:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:03.537 20:23:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:03.537 20:23:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:03.537 20:23:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.537 20:23:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.537 20:23:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.537 20:23:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.537 20:23:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.537 20:23:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.537 20:23:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.537 20:23:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:03.537 20:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.537 20:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.537 20:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.537 20:23:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.537 "name": "raid_bdev1", 00:11:03.537 "uuid": "299b6d7e-0a3b-4a9f-8409-112d644c95cc", 00:11:03.537 "strip_size_kb": 64, 00:11:03.537 "state": "online", 00:11:03.537 "raid_level": "raid0", 00:11:03.537 "superblock": true, 00:11:03.537 "num_base_bdevs": 4, 00:11:03.537 "num_base_bdevs_discovered": 4, 00:11:03.537 "num_base_bdevs_operational": 4, 00:11:03.537 "base_bdevs_list": [ 00:11:03.537 
{ 00:11:03.537 "name": "BaseBdev1", 00:11:03.537 "uuid": "8014e324-4ed6-516b-bb38-19239dab92bd", 00:11:03.537 "is_configured": true, 00:11:03.537 "data_offset": 2048, 00:11:03.537 "data_size": 63488 00:11:03.537 }, 00:11:03.537 { 00:11:03.537 "name": "BaseBdev2", 00:11:03.537 "uuid": "c4ffa9ac-9c23-5bef-84cb-b131529cf406", 00:11:03.537 "is_configured": true, 00:11:03.537 "data_offset": 2048, 00:11:03.537 "data_size": 63488 00:11:03.537 }, 00:11:03.537 { 00:11:03.537 "name": "BaseBdev3", 00:11:03.537 "uuid": "6151b06c-45d6-5702-825f-cb1d0a0b2c80", 00:11:03.537 "is_configured": true, 00:11:03.537 "data_offset": 2048, 00:11:03.537 "data_size": 63488 00:11:03.537 }, 00:11:03.537 { 00:11:03.537 "name": "BaseBdev4", 00:11:03.537 "uuid": "41fd9541-0569-5fcd-bdb2-29f5964b031c", 00:11:03.537 "is_configured": true, 00:11:03.537 "data_offset": 2048, 00:11:03.537 "data_size": 63488 00:11:03.537 } 00:11:03.537 ] 00:11:03.537 }' 00:11:03.537 20:23:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.537 20:23:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.795 20:23:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:03.795 20:23:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:04.052 [2024-11-26 20:23:57.353840] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:04.987 20:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:04.987 20:23:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.987 20:23:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.987 20:23:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.987 20:23:58 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:04.987 20:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:04.987 20:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:04.987 20:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:04.987 20:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:04.987 20:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:04.987 20:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:04.987 20:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.987 20:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.987 20:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.987 20:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.987 20:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.987 20:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.987 20:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:04.987 20:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.987 20:23:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.987 20:23:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.987 20:23:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.987 20:23:58 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.987 "name": "raid_bdev1", 00:11:04.987 "uuid": "299b6d7e-0a3b-4a9f-8409-112d644c95cc", 00:11:04.987 "strip_size_kb": 64, 00:11:04.987 "state": "online", 00:11:04.987 "raid_level": "raid0", 00:11:04.987 "superblock": true, 00:11:04.987 "num_base_bdevs": 4, 00:11:04.987 "num_base_bdevs_discovered": 4, 00:11:04.987 "num_base_bdevs_operational": 4, 00:11:04.987 "base_bdevs_list": [ 00:11:04.987 { 00:11:04.987 "name": "BaseBdev1", 00:11:04.987 "uuid": "8014e324-4ed6-516b-bb38-19239dab92bd", 00:11:04.987 "is_configured": true, 00:11:04.987 "data_offset": 2048, 00:11:04.987 "data_size": 63488 00:11:04.987 }, 00:11:04.987 { 00:11:04.987 "name": "BaseBdev2", 00:11:04.987 "uuid": "c4ffa9ac-9c23-5bef-84cb-b131529cf406", 00:11:04.987 "is_configured": true, 00:11:04.987 "data_offset": 2048, 00:11:04.987 "data_size": 63488 00:11:04.988 }, 00:11:04.988 { 00:11:04.988 "name": "BaseBdev3", 00:11:04.988 "uuid": "6151b06c-45d6-5702-825f-cb1d0a0b2c80", 00:11:04.988 "is_configured": true, 00:11:04.988 "data_offset": 2048, 00:11:04.988 "data_size": 63488 00:11:04.988 }, 00:11:04.988 { 00:11:04.988 "name": "BaseBdev4", 00:11:04.988 "uuid": "41fd9541-0569-5fcd-bdb2-29f5964b031c", 00:11:04.988 "is_configured": true, 00:11:04.988 "data_offset": 2048, 00:11:04.988 "data_size": 63488 00:11:04.988 } 00:11:04.988 ] 00:11:04.988 }' 00:11:04.988 20:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.988 20:23:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.247 20:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:05.247 20:23:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.247 20:23:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.247 [2024-11-26 20:23:58.734822] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:05.247 [2024-11-26 20:23:58.734931] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:05.247 [2024-11-26 20:23:58.737882] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:05.247 [2024-11-26 20:23:58.737994] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:05.247 [2024-11-26 20:23:58.738054] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:05.247 [2024-11-26 20:23:58.738065] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:11:05.247 { 00:11:05.247 "results": [ 00:11:05.247 { 00:11:05.247 "job": "raid_bdev1", 00:11:05.247 "core_mask": "0x1", 00:11:05.247 "workload": "randrw", 00:11:05.247 "percentage": 50, 00:11:05.247 "status": "finished", 00:11:05.247 "queue_depth": 1, 00:11:05.247 "io_size": 131072, 00:11:05.247 "runtime": 1.381667, 00:11:05.247 "iops": 13095.774886423429, 00:11:05.247 "mibps": 1636.9718608029286, 00:11:05.247 "io_failed": 1, 00:11:05.247 "io_timeout": 0, 00:11:05.247 "avg_latency_us": 104.3343846342267, 00:11:05.247 "min_latency_us": 27.72401746724891, 00:11:05.247 "max_latency_us": 2919.0707423580784 00:11:05.247 } 00:11:05.247 ], 00:11:05.247 "core_count": 1 00:11:05.247 } 00:11:05.247 20:23:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.247 20:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 82374 00:11:05.247 20:23:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 82374 ']' 00:11:05.247 20:23:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 82374 00:11:05.247 20:23:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:11:05.247 20:23:58 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:05.247 20:23:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82374 00:11:05.247 20:23:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:05.247 20:23:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:05.247 20:23:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82374' 00:11:05.247 killing process with pid 82374 00:11:05.247 20:23:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 82374 00:11:05.247 [2024-11-26 20:23:58.775597] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:05.247 20:23:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 82374 00:11:05.506 [2024-11-26 20:23:58.833223] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:05.765 20:23:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.elQp1KKNge 00:11:05.765 20:23:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:05.765 20:23:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:05.765 20:23:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:11:05.765 ************************************ 00:11:05.765 END TEST raid_read_error_test 00:11:05.765 ************************************ 00:11:05.765 20:23:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:05.765 20:23:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:05.765 20:23:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:05.765 20:23:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:11:05.765 00:11:05.765 real 0m3.524s 
00:11:05.765 user 0m4.320s 00:11:05.765 sys 0m0.635s 00:11:05.765 20:23:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:05.765 20:23:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.765 20:23:59 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:11:05.765 20:23:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:05.765 20:23:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:05.765 20:23:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:05.765 ************************************ 00:11:05.765 START TEST raid_write_error_test 00:11:05.765 ************************************ 00:11:05.765 20:23:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 write 00:11:05.765 20:23:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:05.765 20:23:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:05.765 20:23:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:05.765 20:23:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:05.765 20:23:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:05.765 20:23:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:05.765 20:23:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:05.765 20:23:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:05.765 20:23:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:05.765 20:23:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:05.765 20:23:59 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:05.765 20:23:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:05.765 20:23:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:05.765 20:23:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:05.765 20:23:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:05.765 20:23:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:05.765 20:23:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:05.765 20:23:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:05.765 20:23:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:05.765 20:23:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:05.765 20:23:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:05.765 20:23:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:05.765 20:23:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:05.765 20:23:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:05.765 20:23:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:05.765 20:23:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:05.765 20:23:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:05.765 20:23:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:05.765 20:23:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.7Z25D2GR6E 00:11:05.765 20:23:59 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=82509 00:11:05.765 20:23:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:05.765 20:23:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 82509 00:11:05.765 20:23:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 82509 ']' 00:11:05.765 20:23:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.765 20:23:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:05.765 20:23:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:05.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.765 20:23:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:05.765 20:23:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.023 [2024-11-26 20:23:59.386481] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:11:06.023 [2024-11-26 20:23:59.386729] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82509 ] 00:11:06.023 [2024-11-26 20:23:59.535780] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.283 [2024-11-26 20:23:59.617386] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.283 [2024-11-26 20:23:59.690569] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:06.283 [2024-11-26 20:23:59.690715] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:06.851 20:24:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:06.851 20:24:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:06.851 20:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:06.851 20:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:06.851 20:24:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.851 20:24:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.851 BaseBdev1_malloc 00:11:06.851 20:24:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.851 20:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:06.851 20:24:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.851 20:24:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.851 true 00:11:06.851 20:24:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:06.851 20:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:06.851 20:24:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.851 20:24:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.851 [2024-11-26 20:24:00.331551] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:06.851 [2024-11-26 20:24:00.331606] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:06.851 [2024-11-26 20:24:00.331642] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:06.851 [2024-11-26 20:24:00.331680] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.851 [2024-11-26 20:24:00.334086] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.851 [2024-11-26 20:24:00.334174] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:06.851 BaseBdev1 00:11:06.851 20:24:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.851 20:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:06.851 20:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:06.851 20:24:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.851 20:24:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.851 BaseBdev2_malloc 00:11:06.851 20:24:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.851 20:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:06.851 20:24:00 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.851 20:24:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.851 true 00:11:06.851 20:24:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.851 20:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:06.851 20:24:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.851 20:24:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.851 [2024-11-26 20:24:00.385877] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:06.851 [2024-11-26 20:24:00.385941] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:06.851 [2024-11-26 20:24:00.385961] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:06.851 [2024-11-26 20:24:00.385970] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.852 [2024-11-26 20:24:00.388183] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.852 [2024-11-26 20:24:00.388271] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:06.852 BaseBdev2 00:11:06.852 20:24:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.852 20:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:06.852 20:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:06.852 20:24:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.852 20:24:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:07.111 BaseBdev3_malloc 00:11:07.111 20:24:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.111 20:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:07.111 20:24:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.111 20:24:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.111 true 00:11:07.111 20:24:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.111 20:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:07.111 20:24:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.111 20:24:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.111 [2024-11-26 20:24:00.427102] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:07.111 [2024-11-26 20:24:00.427165] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:07.112 [2024-11-26 20:24:00.427188] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:07.112 [2024-11-26 20:24:00.427199] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:07.112 [2024-11-26 20:24:00.429570] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:07.112 [2024-11-26 20:24:00.429708] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:07.112 BaseBdev3 00:11:07.112 20:24:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.112 20:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:07.112 20:24:00 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:07.112 20:24:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.112 20:24:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.112 BaseBdev4_malloc 00:11:07.112 20:24:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.112 20:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:07.112 20:24:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.112 20:24:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.112 true 00:11:07.112 20:24:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.112 20:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:07.112 20:24:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.112 20:24:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.112 [2024-11-26 20:24:00.474171] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:07.112 [2024-11-26 20:24:00.474231] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:07.112 [2024-11-26 20:24:00.474258] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:07.112 [2024-11-26 20:24:00.474269] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:07.112 [2024-11-26 20:24:00.476674] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:07.112 [2024-11-26 20:24:00.476765] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:07.112 BaseBdev4 
00:11:07.112 20:24:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.112 20:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:07.112 20:24:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.112 20:24:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.112 [2024-11-26 20:24:00.486221] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:07.112 [2024-11-26 20:24:00.488338] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:07.112 [2024-11-26 20:24:00.488502] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:07.112 [2024-11-26 20:24:00.488589] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:07.112 [2024-11-26 20:24:00.488833] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:11:07.112 [2024-11-26 20:24:00.488848] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:07.112 [2024-11-26 20:24:00.489140] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:07.112 [2024-11-26 20:24:00.489302] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:11:07.112 [2024-11-26 20:24:00.489316] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:11:07.112 [2024-11-26 20:24:00.489470] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:07.112 20:24:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.112 20:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:11:07.112 20:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:07.112 20:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:07.112 20:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:07.112 20:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:07.112 20:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.112 20:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.112 20:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.112 20:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.112 20:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.112 20:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.112 20:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:07.112 20:24:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.112 20:24:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.112 20:24:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.112 20:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.112 "name": "raid_bdev1", 00:11:07.112 "uuid": "37067b2f-914d-48f9-b0ca-5bcfb04454ff", 00:11:07.112 "strip_size_kb": 64, 00:11:07.112 "state": "online", 00:11:07.112 "raid_level": "raid0", 00:11:07.112 "superblock": true, 00:11:07.112 "num_base_bdevs": 4, 00:11:07.112 "num_base_bdevs_discovered": 4, 00:11:07.112 
"num_base_bdevs_operational": 4, 00:11:07.112 "base_bdevs_list": [ 00:11:07.112 { 00:11:07.112 "name": "BaseBdev1", 00:11:07.112 "uuid": "5229b90c-c8fe-5bf8-9c60-9f01b15ab6f9", 00:11:07.112 "is_configured": true, 00:11:07.112 "data_offset": 2048, 00:11:07.112 "data_size": 63488 00:11:07.112 }, 00:11:07.112 { 00:11:07.112 "name": "BaseBdev2", 00:11:07.112 "uuid": "9dbe0363-0ad2-56f7-9643-9a2cb8ab5e9a", 00:11:07.112 "is_configured": true, 00:11:07.112 "data_offset": 2048, 00:11:07.112 "data_size": 63488 00:11:07.112 }, 00:11:07.112 { 00:11:07.112 "name": "BaseBdev3", 00:11:07.112 "uuid": "7cb8d2c0-d97c-531c-acde-3a743cce74fc", 00:11:07.112 "is_configured": true, 00:11:07.112 "data_offset": 2048, 00:11:07.112 "data_size": 63488 00:11:07.112 }, 00:11:07.112 { 00:11:07.112 "name": "BaseBdev4", 00:11:07.112 "uuid": "1641a4b9-df69-54fb-8768-327df6658f12", 00:11:07.112 "is_configured": true, 00:11:07.112 "data_offset": 2048, 00:11:07.112 "data_size": 63488 00:11:07.112 } 00:11:07.112 ] 00:11:07.112 }' 00:11:07.112 20:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.112 20:24:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.681 20:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:07.681 20:24:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:07.681 [2024-11-26 20:24:01.065639] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:08.618 20:24:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:08.618 20:24:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.618 20:24:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.618 20:24:01 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.618 20:24:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:08.618 20:24:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:08.618 20:24:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:08.618 20:24:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:08.618 20:24:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:08.618 20:24:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:08.618 20:24:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:08.618 20:24:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.618 20:24:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.618 20:24:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.618 20:24:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.618 20:24:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.618 20:24:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.618 20:24:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:08.618 20:24:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.618 20:24:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.618 20:24:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.618 20:24:02 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.618 20:24:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.618 "name": "raid_bdev1", 00:11:08.618 "uuid": "37067b2f-914d-48f9-b0ca-5bcfb04454ff", 00:11:08.618 "strip_size_kb": 64, 00:11:08.618 "state": "online", 00:11:08.618 "raid_level": "raid0", 00:11:08.618 "superblock": true, 00:11:08.618 "num_base_bdevs": 4, 00:11:08.618 "num_base_bdevs_discovered": 4, 00:11:08.618 "num_base_bdevs_operational": 4, 00:11:08.618 "base_bdevs_list": [ 00:11:08.618 { 00:11:08.618 "name": "BaseBdev1", 00:11:08.618 "uuid": "5229b90c-c8fe-5bf8-9c60-9f01b15ab6f9", 00:11:08.618 "is_configured": true, 00:11:08.618 "data_offset": 2048, 00:11:08.618 "data_size": 63488 00:11:08.618 }, 00:11:08.618 { 00:11:08.618 "name": "BaseBdev2", 00:11:08.618 "uuid": "9dbe0363-0ad2-56f7-9643-9a2cb8ab5e9a", 00:11:08.618 "is_configured": true, 00:11:08.618 "data_offset": 2048, 00:11:08.618 "data_size": 63488 00:11:08.618 }, 00:11:08.618 { 00:11:08.618 "name": "BaseBdev3", 00:11:08.618 "uuid": "7cb8d2c0-d97c-531c-acde-3a743cce74fc", 00:11:08.618 "is_configured": true, 00:11:08.618 "data_offset": 2048, 00:11:08.618 "data_size": 63488 00:11:08.618 }, 00:11:08.618 { 00:11:08.618 "name": "BaseBdev4", 00:11:08.618 "uuid": "1641a4b9-df69-54fb-8768-327df6658f12", 00:11:08.618 "is_configured": true, 00:11:08.618 "data_offset": 2048, 00:11:08.618 "data_size": 63488 00:11:08.618 } 00:11:08.618 ] 00:11:08.618 }' 00:11:08.618 20:24:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.618 20:24:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.186 20:24:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:09.186 20:24:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.186 20:24:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:11:09.186 [2024-11-26 20:24:02.443235] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:09.186 [2024-11-26 20:24:02.443336] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:09.186 [2024-11-26 20:24:02.446296] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:09.186 [2024-11-26 20:24:02.446388] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:09.186 [2024-11-26 20:24:02.446477] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:09.186 [2024-11-26 20:24:02.446530] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:11:09.186 { 00:11:09.186 "results": [ 00:11:09.186 { 00:11:09.186 "job": "raid_bdev1", 00:11:09.186 "core_mask": "0x1", 00:11:09.186 "workload": "randrw", 00:11:09.186 "percentage": 50, 00:11:09.186 "status": "finished", 00:11:09.186 "queue_depth": 1, 00:11:09.186 "io_size": 131072, 00:11:09.186 "runtime": 1.378165, 00:11:09.186 "iops": 13228.459582125508, 00:11:09.186 "mibps": 1653.5574477656885, 00:11:09.186 "io_failed": 1, 00:11:09.186 "io_timeout": 0, 00:11:09.186 "avg_latency_us": 107.49523406228505, 00:11:09.186 "min_latency_us": 27.50043668122271, 00:11:09.186 "max_latency_us": 1745.7187772925763 00:11:09.186 } 00:11:09.186 ], 00:11:09.186 "core_count": 1 00:11:09.186 } 00:11:09.186 20:24:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.186 20:24:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 82509 00:11:09.186 20:24:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 82509 ']' 00:11:09.186 20:24:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 82509 00:11:09.186 20:24:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # 
uname 00:11:09.186 20:24:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:09.186 20:24:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82509 00:11:09.186 20:24:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:09.186 killing process with pid 82509 00:11:09.186 20:24:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:09.186 20:24:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82509' 00:11:09.186 20:24:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 82509 00:11:09.186 [2024-11-26 20:24:02.502879] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:09.186 20:24:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 82509 00:11:09.186 [2024-11-26 20:24:02.557787] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:09.447 20:24:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.7Z25D2GR6E 00:11:09.447 20:24:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:09.447 20:24:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:09.447 20:24:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:11:09.447 20:24:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:09.447 20:24:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:09.447 20:24:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:09.447 ************************************ 00:11:09.447 END TEST raid_write_error_test 00:11:09.447 ************************************ 00:11:09.447 20:24:02 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:11:09.447 00:11:09.447 real 0m3.653s 00:11:09.447 user 0m4.573s 00:11:09.447 sys 0m0.636s 00:11:09.447 20:24:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:09.447 20:24:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.447 20:24:02 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:09.447 20:24:02 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:11:09.447 20:24:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:09.447 20:24:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:09.447 20:24:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:09.447 ************************************ 00:11:09.447 START TEST raid_state_function_test 00:11:09.447 ************************************ 00:11:09.447 20:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 false 00:11:09.447 20:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:09.447 20:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:09.447 20:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:09.447 20:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:09.447 20:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:09.447 20:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:09.447 20:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:09.447 20:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:09.447 20:24:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:09.447 20:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:09.447 20:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:09.447 20:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:09.447 20:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:09.447 20:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:09.447 20:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:09.447 20:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:09.447 20:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:09.447 20:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:09.447 20:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:09.447 20:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:09.447 20:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:09.447 20:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:09.447 20:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:09.447 20:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:09.447 20:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:09.447 20:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:09.447 20:24:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:09.447 20:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:09.447 20:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:09.447 20:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82641 00:11:09.447 20:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:09.447 20:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82641' 00:11:09.447 Process raid pid: 82641 00:11:09.447 20:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82641 00:11:09.447 20:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 82641 ']' 00:11:09.447 20:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:09.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:09.447 20:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:09.447 20:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:09.447 20:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:09.447 20:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.707 [2024-11-26 20:24:03.059787] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:11:09.707 [2024-11-26 20:24:03.060045] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:09.707 [2024-11-26 20:24:03.226505] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.967 [2024-11-26 20:24:03.314222] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.967 [2024-11-26 20:24:03.394742] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:09.967 [2024-11-26 20:24:03.394810] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:10.534 20:24:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:10.534 20:24:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:11:10.534 20:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:10.534 20:24:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.534 20:24:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.534 [2024-11-26 20:24:03.996426] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:10.534 [2024-11-26 20:24:03.996489] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:10.534 [2024-11-26 20:24:03.996503] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:10.534 [2024-11-26 20:24:03.996515] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:10.534 [2024-11-26 20:24:03.996523] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:10.534 [2024-11-26 20:24:03.996538] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:10.534 [2024-11-26 20:24:03.996546] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:10.534 [2024-11-26 20:24:03.996566] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:10.534 20:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.534 20:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:10.534 20:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.534 20:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.534 20:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:10.534 20:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.534 20:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.534 20:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.534 20:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.534 20:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.534 20:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.534 20:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.534 20:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.534 20:24:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.534 20:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.534 20:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.534 20:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.534 "name": "Existed_Raid", 00:11:10.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.534 "strip_size_kb": 64, 00:11:10.534 "state": "configuring", 00:11:10.534 "raid_level": "concat", 00:11:10.534 "superblock": false, 00:11:10.534 "num_base_bdevs": 4, 00:11:10.534 "num_base_bdevs_discovered": 0, 00:11:10.534 "num_base_bdevs_operational": 4, 00:11:10.534 "base_bdevs_list": [ 00:11:10.534 { 00:11:10.534 "name": "BaseBdev1", 00:11:10.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.534 "is_configured": false, 00:11:10.534 "data_offset": 0, 00:11:10.534 "data_size": 0 00:11:10.534 }, 00:11:10.534 { 00:11:10.534 "name": "BaseBdev2", 00:11:10.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.534 "is_configured": false, 00:11:10.534 "data_offset": 0, 00:11:10.534 "data_size": 0 00:11:10.534 }, 00:11:10.534 { 00:11:10.534 "name": "BaseBdev3", 00:11:10.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.534 "is_configured": false, 00:11:10.534 "data_offset": 0, 00:11:10.534 "data_size": 0 00:11:10.534 }, 00:11:10.534 { 00:11:10.534 "name": "BaseBdev4", 00:11:10.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.534 "is_configured": false, 00:11:10.534 "data_offset": 0, 00:11:10.534 "data_size": 0 00:11:10.534 } 00:11:10.534 ] 00:11:10.534 }' 00:11:10.534 20:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.534 20:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.100 20:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:11:11.100 20:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.100 20:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.100 [2024-11-26 20:24:04.463515] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:11.100 [2024-11-26 20:24:04.463645] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:11:11.100 20:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.100 20:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:11.100 20:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.100 20:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.100 [2024-11-26 20:24:04.471572] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:11.101 [2024-11-26 20:24:04.471693] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:11.101 [2024-11-26 20:24:04.471749] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:11.101 [2024-11-26 20:24:04.471779] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:11.101 [2024-11-26 20:24:04.471810] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:11.101 [2024-11-26 20:24:04.471837] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:11.101 [2024-11-26 20:24:04.471901] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:11.101 [2024-11-26 20:24:04.471939] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:11.101 20:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.101 20:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:11.101 20:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.101 20:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.101 [2024-11-26 20:24:04.495749] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:11.101 BaseBdev1 00:11:11.101 20:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.101 20:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:11.101 20:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:11.101 20:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:11.101 20:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:11.101 20:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:11.101 20:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:11.101 20:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:11.101 20:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.101 20:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.101 20:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.101 20:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:11.101 20:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.101 20:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.101 [ 00:11:11.101 { 00:11:11.101 "name": "BaseBdev1", 00:11:11.101 "aliases": [ 00:11:11.101 "e2126f56-17e4-41bb-b44c-e7af3aec966a" 00:11:11.101 ], 00:11:11.101 "product_name": "Malloc disk", 00:11:11.101 "block_size": 512, 00:11:11.101 "num_blocks": 65536, 00:11:11.101 "uuid": "e2126f56-17e4-41bb-b44c-e7af3aec966a", 00:11:11.101 "assigned_rate_limits": { 00:11:11.101 "rw_ios_per_sec": 0, 00:11:11.101 "rw_mbytes_per_sec": 0, 00:11:11.101 "r_mbytes_per_sec": 0, 00:11:11.101 "w_mbytes_per_sec": 0 00:11:11.101 }, 00:11:11.101 "claimed": true, 00:11:11.101 "claim_type": "exclusive_write", 00:11:11.101 "zoned": false, 00:11:11.101 "supported_io_types": { 00:11:11.101 "read": true, 00:11:11.101 "write": true, 00:11:11.101 "unmap": true, 00:11:11.101 "flush": true, 00:11:11.101 "reset": true, 00:11:11.101 "nvme_admin": false, 00:11:11.101 "nvme_io": false, 00:11:11.101 "nvme_io_md": false, 00:11:11.101 "write_zeroes": true, 00:11:11.101 "zcopy": true, 00:11:11.101 "get_zone_info": false, 00:11:11.101 "zone_management": false, 00:11:11.101 "zone_append": false, 00:11:11.101 "compare": false, 00:11:11.101 "compare_and_write": false, 00:11:11.101 "abort": true, 00:11:11.101 "seek_hole": false, 00:11:11.101 "seek_data": false, 00:11:11.101 "copy": true, 00:11:11.101 "nvme_iov_md": false 00:11:11.101 }, 00:11:11.101 "memory_domains": [ 00:11:11.101 { 00:11:11.101 "dma_device_id": "system", 00:11:11.101 "dma_device_type": 1 00:11:11.101 }, 00:11:11.101 { 00:11:11.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.101 "dma_device_type": 2 00:11:11.101 } 00:11:11.101 ], 00:11:11.101 "driver_specific": {} 00:11:11.101 } 00:11:11.101 ] 00:11:11.101 20:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:11:11.101 20:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:11.101 20:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:11.101 20:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.101 20:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:11.101 20:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:11.101 20:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:11.101 20:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:11.101 20:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.101 20:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.101 20:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.101 20:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.101 20:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.101 20:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.101 20:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.101 20:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.101 20:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.101 20:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.101 "name": "Existed_Raid", 
00:11:11.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.101 "strip_size_kb": 64, 00:11:11.101 "state": "configuring", 00:11:11.101 "raid_level": "concat", 00:11:11.101 "superblock": false, 00:11:11.101 "num_base_bdevs": 4, 00:11:11.101 "num_base_bdevs_discovered": 1, 00:11:11.101 "num_base_bdevs_operational": 4, 00:11:11.101 "base_bdevs_list": [ 00:11:11.101 { 00:11:11.101 "name": "BaseBdev1", 00:11:11.101 "uuid": "e2126f56-17e4-41bb-b44c-e7af3aec966a", 00:11:11.101 "is_configured": true, 00:11:11.101 "data_offset": 0, 00:11:11.101 "data_size": 65536 00:11:11.101 }, 00:11:11.101 { 00:11:11.101 "name": "BaseBdev2", 00:11:11.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.101 "is_configured": false, 00:11:11.101 "data_offset": 0, 00:11:11.101 "data_size": 0 00:11:11.101 }, 00:11:11.101 { 00:11:11.101 "name": "BaseBdev3", 00:11:11.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.101 "is_configured": false, 00:11:11.101 "data_offset": 0, 00:11:11.101 "data_size": 0 00:11:11.101 }, 00:11:11.101 { 00:11:11.101 "name": "BaseBdev4", 00:11:11.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.101 "is_configured": false, 00:11:11.101 "data_offset": 0, 00:11:11.101 "data_size": 0 00:11:11.101 } 00:11:11.101 ] 00:11:11.101 }' 00:11:11.101 20:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.101 20:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.668 20:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:11.668 20:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.668 20:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.668 [2024-11-26 20:24:04.975033] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:11.668 [2024-11-26 20:24:04.975108] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:11:11.668 20:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.668 20:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:11.668 20:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.668 20:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.668 [2024-11-26 20:24:04.987092] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:11.668 [2024-11-26 20:24:04.989369] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:11.669 [2024-11-26 20:24:04.989435] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:11.669 [2024-11-26 20:24:04.989447] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:11.669 [2024-11-26 20:24:04.989458] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:11.669 [2024-11-26 20:24:04.989466] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:11.669 [2024-11-26 20:24:04.989476] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:11.669 20:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.669 20:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:11.669 20:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:11.669 20:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:11:11.669 20:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.669 20:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:11.669 20:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:11.669 20:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:11.669 20:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:11.669 20:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.669 20:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.669 20:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.669 20:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.669 20:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.669 20:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.669 20:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.669 20:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.669 20:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.669 20:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.669 "name": "Existed_Raid", 00:11:11.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.669 "strip_size_kb": 64, 00:11:11.669 "state": "configuring", 00:11:11.669 "raid_level": "concat", 00:11:11.669 "superblock": false, 00:11:11.669 "num_base_bdevs": 4, 00:11:11.669 
"num_base_bdevs_discovered": 1, 00:11:11.669 "num_base_bdevs_operational": 4, 00:11:11.669 "base_bdevs_list": [ 00:11:11.669 { 00:11:11.669 "name": "BaseBdev1", 00:11:11.669 "uuid": "e2126f56-17e4-41bb-b44c-e7af3aec966a", 00:11:11.669 "is_configured": true, 00:11:11.669 "data_offset": 0, 00:11:11.669 "data_size": 65536 00:11:11.669 }, 00:11:11.669 { 00:11:11.669 "name": "BaseBdev2", 00:11:11.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.669 "is_configured": false, 00:11:11.669 "data_offset": 0, 00:11:11.669 "data_size": 0 00:11:11.669 }, 00:11:11.669 { 00:11:11.669 "name": "BaseBdev3", 00:11:11.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.669 "is_configured": false, 00:11:11.669 "data_offset": 0, 00:11:11.669 "data_size": 0 00:11:11.669 }, 00:11:11.669 { 00:11:11.669 "name": "BaseBdev4", 00:11:11.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.669 "is_configured": false, 00:11:11.669 "data_offset": 0, 00:11:11.669 "data_size": 0 00:11:11.669 } 00:11:11.669 ] 00:11:11.669 }' 00:11:11.669 20:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.669 20:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.927 20:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:11.927 20:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.927 20:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.185 [2024-11-26 20:24:05.502533] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:12.186 BaseBdev2 00:11:12.186 20:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.186 20:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:12.186 20:24:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:12.186 20:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:12.186 20:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:12.186 20:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:12.186 20:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:12.186 20:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:12.186 20:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.186 20:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.186 20:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.186 20:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:12.186 20:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.186 20:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.186 [ 00:11:12.186 { 00:11:12.186 "name": "BaseBdev2", 00:11:12.186 "aliases": [ 00:11:12.186 "f91dbce3-07c5-4c84-b1ea-f87257109ef9" 00:11:12.186 ], 00:11:12.186 "product_name": "Malloc disk", 00:11:12.186 "block_size": 512, 00:11:12.186 "num_blocks": 65536, 00:11:12.186 "uuid": "f91dbce3-07c5-4c84-b1ea-f87257109ef9", 00:11:12.186 "assigned_rate_limits": { 00:11:12.186 "rw_ios_per_sec": 0, 00:11:12.186 "rw_mbytes_per_sec": 0, 00:11:12.186 "r_mbytes_per_sec": 0, 00:11:12.186 "w_mbytes_per_sec": 0 00:11:12.186 }, 00:11:12.186 "claimed": true, 00:11:12.186 "claim_type": "exclusive_write", 00:11:12.186 "zoned": false, 00:11:12.186 "supported_io_types": { 
00:11:12.186 "read": true, 00:11:12.186 "write": true, 00:11:12.186 "unmap": true, 00:11:12.186 "flush": true, 00:11:12.186 "reset": true, 00:11:12.186 "nvme_admin": false, 00:11:12.186 "nvme_io": false, 00:11:12.186 "nvme_io_md": false, 00:11:12.186 "write_zeroes": true, 00:11:12.186 "zcopy": true, 00:11:12.186 "get_zone_info": false, 00:11:12.186 "zone_management": false, 00:11:12.186 "zone_append": false, 00:11:12.186 "compare": false, 00:11:12.186 "compare_and_write": false, 00:11:12.186 "abort": true, 00:11:12.186 "seek_hole": false, 00:11:12.186 "seek_data": false, 00:11:12.186 "copy": true, 00:11:12.186 "nvme_iov_md": false 00:11:12.186 }, 00:11:12.186 "memory_domains": [ 00:11:12.186 { 00:11:12.186 "dma_device_id": "system", 00:11:12.186 "dma_device_type": 1 00:11:12.186 }, 00:11:12.186 { 00:11:12.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.186 "dma_device_type": 2 00:11:12.186 } 00:11:12.186 ], 00:11:12.186 "driver_specific": {} 00:11:12.186 } 00:11:12.186 ] 00:11:12.186 20:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.186 20:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:12.186 20:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:12.186 20:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:12.186 20:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:12.186 20:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.186 20:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:12.186 20:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:12.186 20:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:11:12.186 20:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:12.186 20:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.186 20:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.186 20:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.186 20:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.186 20:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.186 20:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.186 20:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.186 20:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.186 20:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.186 20:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.186 "name": "Existed_Raid", 00:11:12.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.186 "strip_size_kb": 64, 00:11:12.186 "state": "configuring", 00:11:12.186 "raid_level": "concat", 00:11:12.186 "superblock": false, 00:11:12.186 "num_base_bdevs": 4, 00:11:12.186 "num_base_bdevs_discovered": 2, 00:11:12.186 "num_base_bdevs_operational": 4, 00:11:12.186 "base_bdevs_list": [ 00:11:12.186 { 00:11:12.186 "name": "BaseBdev1", 00:11:12.186 "uuid": "e2126f56-17e4-41bb-b44c-e7af3aec966a", 00:11:12.186 "is_configured": true, 00:11:12.186 "data_offset": 0, 00:11:12.186 "data_size": 65536 00:11:12.186 }, 00:11:12.186 { 00:11:12.186 "name": "BaseBdev2", 00:11:12.186 "uuid": "f91dbce3-07c5-4c84-b1ea-f87257109ef9", 00:11:12.186 
"is_configured": true, 00:11:12.186 "data_offset": 0, 00:11:12.186 "data_size": 65536 00:11:12.186 }, 00:11:12.186 { 00:11:12.186 "name": "BaseBdev3", 00:11:12.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.186 "is_configured": false, 00:11:12.186 "data_offset": 0, 00:11:12.186 "data_size": 0 00:11:12.186 }, 00:11:12.186 { 00:11:12.186 "name": "BaseBdev4", 00:11:12.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.186 "is_configured": false, 00:11:12.186 "data_offset": 0, 00:11:12.186 "data_size": 0 00:11:12.186 } 00:11:12.186 ] 00:11:12.186 }' 00:11:12.186 20:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.186 20:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.445 20:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:12.445 20:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.445 20:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.706 [2024-11-26 20:24:06.014273] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:12.706 BaseBdev3 00:11:12.706 20:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.706 20:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:12.706 20:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:12.706 20:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:12.706 20:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:12.706 20:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:12.706 20:24:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:12.706 20:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:12.706 20:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.706 20:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.706 20:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.706 20:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:12.706 20:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.706 20:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.706 [ 00:11:12.706 { 00:11:12.706 "name": "BaseBdev3", 00:11:12.706 "aliases": [ 00:11:12.706 "f657ad5a-4ea5-44ac-9e5d-90c8e6f79f38" 00:11:12.706 ], 00:11:12.706 "product_name": "Malloc disk", 00:11:12.706 "block_size": 512, 00:11:12.706 "num_blocks": 65536, 00:11:12.706 "uuid": "f657ad5a-4ea5-44ac-9e5d-90c8e6f79f38", 00:11:12.706 "assigned_rate_limits": { 00:11:12.706 "rw_ios_per_sec": 0, 00:11:12.706 "rw_mbytes_per_sec": 0, 00:11:12.706 "r_mbytes_per_sec": 0, 00:11:12.706 "w_mbytes_per_sec": 0 00:11:12.706 }, 00:11:12.706 "claimed": true, 00:11:12.706 "claim_type": "exclusive_write", 00:11:12.706 "zoned": false, 00:11:12.706 "supported_io_types": { 00:11:12.706 "read": true, 00:11:12.706 "write": true, 00:11:12.706 "unmap": true, 00:11:12.706 "flush": true, 00:11:12.706 "reset": true, 00:11:12.706 "nvme_admin": false, 00:11:12.706 "nvme_io": false, 00:11:12.706 "nvme_io_md": false, 00:11:12.706 "write_zeroes": true, 00:11:12.706 "zcopy": true, 00:11:12.706 "get_zone_info": false, 00:11:12.706 "zone_management": false, 00:11:12.706 "zone_append": false, 00:11:12.706 "compare": false, 00:11:12.706 "compare_and_write": false, 
00:11:12.706 "abort": true, 00:11:12.706 "seek_hole": false, 00:11:12.706 "seek_data": false, 00:11:12.706 "copy": true, 00:11:12.706 "nvme_iov_md": false 00:11:12.706 }, 00:11:12.706 "memory_domains": [ 00:11:12.706 { 00:11:12.706 "dma_device_id": "system", 00:11:12.706 "dma_device_type": 1 00:11:12.706 }, 00:11:12.706 { 00:11:12.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.706 "dma_device_type": 2 00:11:12.706 } 00:11:12.706 ], 00:11:12.706 "driver_specific": {} 00:11:12.706 } 00:11:12.706 ] 00:11:12.706 20:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.706 20:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:12.706 20:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:12.706 20:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:12.706 20:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:12.706 20:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.706 20:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:12.706 20:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:12.706 20:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:12.706 20:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:12.706 20:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.706 20:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.706 20:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:12.706 20:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.706 20:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.706 20:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.706 20:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.706 20:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.706 20:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.706 20:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.706 "name": "Existed_Raid", 00:11:12.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.706 "strip_size_kb": 64, 00:11:12.706 "state": "configuring", 00:11:12.706 "raid_level": "concat", 00:11:12.706 "superblock": false, 00:11:12.706 "num_base_bdevs": 4, 00:11:12.706 "num_base_bdevs_discovered": 3, 00:11:12.706 "num_base_bdevs_operational": 4, 00:11:12.706 "base_bdevs_list": [ 00:11:12.706 { 00:11:12.706 "name": "BaseBdev1", 00:11:12.706 "uuid": "e2126f56-17e4-41bb-b44c-e7af3aec966a", 00:11:12.706 "is_configured": true, 00:11:12.706 "data_offset": 0, 00:11:12.706 "data_size": 65536 00:11:12.706 }, 00:11:12.706 { 00:11:12.706 "name": "BaseBdev2", 00:11:12.706 "uuid": "f91dbce3-07c5-4c84-b1ea-f87257109ef9", 00:11:12.706 "is_configured": true, 00:11:12.706 "data_offset": 0, 00:11:12.706 "data_size": 65536 00:11:12.706 }, 00:11:12.706 { 00:11:12.706 "name": "BaseBdev3", 00:11:12.706 "uuid": "f657ad5a-4ea5-44ac-9e5d-90c8e6f79f38", 00:11:12.706 "is_configured": true, 00:11:12.706 "data_offset": 0, 00:11:12.706 "data_size": 65536 00:11:12.706 }, 00:11:12.706 { 00:11:12.706 "name": "BaseBdev4", 00:11:12.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.706 "is_configured": false, 
00:11:12.706 "data_offset": 0, 00:11:12.706 "data_size": 0 00:11:12.706 } 00:11:12.706 ] 00:11:12.706 }' 00:11:12.707 20:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.707 20:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.274 20:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:13.274 20:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.274 20:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.274 [2024-11-26 20:24:06.533094] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:13.274 [2024-11-26 20:24:06.533257] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:11:13.274 [2024-11-26 20:24:06.533289] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:13.274 [2024-11-26 20:24:06.533668] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:13.274 [2024-11-26 20:24:06.533873] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:11:13.274 [2024-11-26 20:24:06.533922] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:11:13.274 [2024-11-26 20:24:06.534186] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:13.274 BaseBdev4 00:11:13.274 20:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.274 20:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:13.274 20:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:13.274 20:24:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:13.274 20:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:13.274 20:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:13.274 20:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:13.274 20:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:13.274 20:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.274 20:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.274 20:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.274 20:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:13.274 20:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.274 20:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.274 [ 00:11:13.274 { 00:11:13.274 "name": "BaseBdev4", 00:11:13.274 "aliases": [ 00:11:13.274 "04f77004-e3bf-4cf8-8dee-b435b815b70c" 00:11:13.274 ], 00:11:13.274 "product_name": "Malloc disk", 00:11:13.274 "block_size": 512, 00:11:13.274 "num_blocks": 65536, 00:11:13.274 "uuid": "04f77004-e3bf-4cf8-8dee-b435b815b70c", 00:11:13.274 "assigned_rate_limits": { 00:11:13.274 "rw_ios_per_sec": 0, 00:11:13.275 "rw_mbytes_per_sec": 0, 00:11:13.275 "r_mbytes_per_sec": 0, 00:11:13.275 "w_mbytes_per_sec": 0 00:11:13.275 }, 00:11:13.275 "claimed": true, 00:11:13.275 "claim_type": "exclusive_write", 00:11:13.275 "zoned": false, 00:11:13.275 "supported_io_types": { 00:11:13.275 "read": true, 00:11:13.275 "write": true, 00:11:13.275 "unmap": true, 00:11:13.275 "flush": true, 00:11:13.275 "reset": true, 00:11:13.275 
"nvme_admin": false, 00:11:13.275 "nvme_io": false, 00:11:13.275 "nvme_io_md": false, 00:11:13.275 "write_zeroes": true, 00:11:13.275 "zcopy": true, 00:11:13.275 "get_zone_info": false, 00:11:13.275 "zone_management": false, 00:11:13.275 "zone_append": false, 00:11:13.275 "compare": false, 00:11:13.275 "compare_and_write": false, 00:11:13.275 "abort": true, 00:11:13.275 "seek_hole": false, 00:11:13.275 "seek_data": false, 00:11:13.275 "copy": true, 00:11:13.275 "nvme_iov_md": false 00:11:13.275 }, 00:11:13.275 "memory_domains": [ 00:11:13.275 { 00:11:13.275 "dma_device_id": "system", 00:11:13.275 "dma_device_type": 1 00:11:13.275 }, 00:11:13.275 { 00:11:13.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.275 "dma_device_type": 2 00:11:13.275 } 00:11:13.275 ], 00:11:13.275 "driver_specific": {} 00:11:13.275 } 00:11:13.275 ] 00:11:13.275 20:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.275 20:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:13.275 20:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:13.275 20:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:13.275 20:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:13.275 20:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.275 20:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:13.275 20:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:13.275 20:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:13.275 20:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:13.275 
20:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.275 20:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.275 20:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.275 20:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.275 20:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.275 20:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.275 20:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.275 20:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.275 20:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.275 20:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.275 "name": "Existed_Raid", 00:11:13.275 "uuid": "88ec51e9-ea02-43d1-80e0-4722c7ceb184", 00:11:13.275 "strip_size_kb": 64, 00:11:13.275 "state": "online", 00:11:13.275 "raid_level": "concat", 00:11:13.275 "superblock": false, 00:11:13.275 "num_base_bdevs": 4, 00:11:13.275 "num_base_bdevs_discovered": 4, 00:11:13.275 "num_base_bdevs_operational": 4, 00:11:13.275 "base_bdevs_list": [ 00:11:13.275 { 00:11:13.275 "name": "BaseBdev1", 00:11:13.275 "uuid": "e2126f56-17e4-41bb-b44c-e7af3aec966a", 00:11:13.275 "is_configured": true, 00:11:13.275 "data_offset": 0, 00:11:13.275 "data_size": 65536 00:11:13.275 }, 00:11:13.275 { 00:11:13.275 "name": "BaseBdev2", 00:11:13.275 "uuid": "f91dbce3-07c5-4c84-b1ea-f87257109ef9", 00:11:13.275 "is_configured": true, 00:11:13.275 "data_offset": 0, 00:11:13.275 "data_size": 65536 00:11:13.275 }, 00:11:13.275 { 00:11:13.275 "name": "BaseBdev3", 
00:11:13.275 "uuid": "f657ad5a-4ea5-44ac-9e5d-90c8e6f79f38", 00:11:13.275 "is_configured": true, 00:11:13.275 "data_offset": 0, 00:11:13.275 "data_size": 65536 00:11:13.275 }, 00:11:13.275 { 00:11:13.275 "name": "BaseBdev4", 00:11:13.275 "uuid": "04f77004-e3bf-4cf8-8dee-b435b815b70c", 00:11:13.275 "is_configured": true, 00:11:13.275 "data_offset": 0, 00:11:13.275 "data_size": 65536 00:11:13.275 } 00:11:13.275 ] 00:11:13.275 }' 00:11:13.275 20:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.275 20:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.841 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:13.841 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:13.841 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:13.841 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:13.841 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:13.841 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:13.841 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:13.841 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:13.841 20:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.841 20:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.841 [2024-11-26 20:24:07.108874] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:13.841 20:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.841 
20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:13.841 "name": "Existed_Raid", 00:11:13.841 "aliases": [ 00:11:13.841 "88ec51e9-ea02-43d1-80e0-4722c7ceb184" 00:11:13.841 ], 00:11:13.841 "product_name": "Raid Volume", 00:11:13.841 "block_size": 512, 00:11:13.841 "num_blocks": 262144, 00:11:13.841 "uuid": "88ec51e9-ea02-43d1-80e0-4722c7ceb184", 00:11:13.841 "assigned_rate_limits": { 00:11:13.841 "rw_ios_per_sec": 0, 00:11:13.841 "rw_mbytes_per_sec": 0, 00:11:13.841 "r_mbytes_per_sec": 0, 00:11:13.841 "w_mbytes_per_sec": 0 00:11:13.841 }, 00:11:13.841 "claimed": false, 00:11:13.841 "zoned": false, 00:11:13.841 "supported_io_types": { 00:11:13.841 "read": true, 00:11:13.841 "write": true, 00:11:13.841 "unmap": true, 00:11:13.841 "flush": true, 00:11:13.841 "reset": true, 00:11:13.841 "nvme_admin": false, 00:11:13.841 "nvme_io": false, 00:11:13.841 "nvme_io_md": false, 00:11:13.841 "write_zeroes": true, 00:11:13.841 "zcopy": false, 00:11:13.841 "get_zone_info": false, 00:11:13.841 "zone_management": false, 00:11:13.841 "zone_append": false, 00:11:13.841 "compare": false, 00:11:13.841 "compare_and_write": false, 00:11:13.841 "abort": false, 00:11:13.841 "seek_hole": false, 00:11:13.841 "seek_data": false, 00:11:13.841 "copy": false, 00:11:13.841 "nvme_iov_md": false 00:11:13.841 }, 00:11:13.841 "memory_domains": [ 00:11:13.841 { 00:11:13.841 "dma_device_id": "system", 00:11:13.841 "dma_device_type": 1 00:11:13.841 }, 00:11:13.841 { 00:11:13.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.841 "dma_device_type": 2 00:11:13.841 }, 00:11:13.841 { 00:11:13.841 "dma_device_id": "system", 00:11:13.841 "dma_device_type": 1 00:11:13.841 }, 00:11:13.841 { 00:11:13.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.841 "dma_device_type": 2 00:11:13.841 }, 00:11:13.841 { 00:11:13.841 "dma_device_id": "system", 00:11:13.841 "dma_device_type": 1 00:11:13.841 }, 00:11:13.841 { 00:11:13.841 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:13.841 "dma_device_type": 2 00:11:13.841 }, 00:11:13.841 { 00:11:13.841 "dma_device_id": "system", 00:11:13.841 "dma_device_type": 1 00:11:13.841 }, 00:11:13.841 { 00:11:13.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.841 "dma_device_type": 2 00:11:13.841 } 00:11:13.841 ], 00:11:13.841 "driver_specific": { 00:11:13.841 "raid": { 00:11:13.841 "uuid": "88ec51e9-ea02-43d1-80e0-4722c7ceb184", 00:11:13.841 "strip_size_kb": 64, 00:11:13.841 "state": "online", 00:11:13.841 "raid_level": "concat", 00:11:13.841 "superblock": false, 00:11:13.841 "num_base_bdevs": 4, 00:11:13.841 "num_base_bdevs_discovered": 4, 00:11:13.841 "num_base_bdevs_operational": 4, 00:11:13.841 "base_bdevs_list": [ 00:11:13.841 { 00:11:13.841 "name": "BaseBdev1", 00:11:13.841 "uuid": "e2126f56-17e4-41bb-b44c-e7af3aec966a", 00:11:13.842 "is_configured": true, 00:11:13.842 "data_offset": 0, 00:11:13.842 "data_size": 65536 00:11:13.842 }, 00:11:13.842 { 00:11:13.842 "name": "BaseBdev2", 00:11:13.842 "uuid": "f91dbce3-07c5-4c84-b1ea-f87257109ef9", 00:11:13.842 "is_configured": true, 00:11:13.842 "data_offset": 0, 00:11:13.842 "data_size": 65536 00:11:13.842 }, 00:11:13.842 { 00:11:13.842 "name": "BaseBdev3", 00:11:13.842 "uuid": "f657ad5a-4ea5-44ac-9e5d-90c8e6f79f38", 00:11:13.842 "is_configured": true, 00:11:13.842 "data_offset": 0, 00:11:13.842 "data_size": 65536 00:11:13.842 }, 00:11:13.842 { 00:11:13.842 "name": "BaseBdev4", 00:11:13.842 "uuid": "04f77004-e3bf-4cf8-8dee-b435b815b70c", 00:11:13.842 "is_configured": true, 00:11:13.842 "data_offset": 0, 00:11:13.842 "data_size": 65536 00:11:13.842 } 00:11:13.842 ] 00:11:13.842 } 00:11:13.842 } 00:11:13.842 }' 00:11:13.842 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:13.842 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:13.842 BaseBdev2 
00:11:13.842 BaseBdev3 00:11:13.842 BaseBdev4' 00:11:13.842 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.842 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:13.842 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:13.842 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.842 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:13.842 20:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.842 20:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.842 20:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.842 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:13.842 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:13.842 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:13.842 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:13.842 20:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.842 20:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.842 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.842 20:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.842 20:24:07 
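The `verify_raid_bdev_properties` pass above relies on two `jq` filters: one listing the configured base bdevs, and one flattening the block/metadata layout into a comparable string. Null or absent fields join as empty strings, which is why `cmp_raid_bdev` comes out as `'512   '` with trailing spaces. Both filters can be reproduced against a trimmed copy of the Raid Volume dump from the log:

```shell
# Trimmed Raid Volume dump from the log (only the fields the filters touch).
raid='{"block_size":512,"driver_specific":{"raid":{"base_bdevs_list":[
  {"name":"BaseBdev1","is_configured":true},{"name":"BaseBdev2","is_configured":true},
  {"name":"BaseBdev3","is_configured":true},{"name":"BaseBdev4","is_configured":true}]}}}'

# bdev_raid.sh@188: names of the configured base bdevs.
base_bdev_names=$(echo "$raid" | jq -r \
  '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name')

# bdev_raid.sh@189: layout string; md_size/md_interleave/dif_type are absent
# here, so join(" ") renders them as empty strings, leaving trailing spaces.
cmp_raid_bdev=$(echo "$raid" | jq -r \
  '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")')

echo "$base_bdev_names"
echo "[$cmp_raid_bdev]"
```

The per-bdev loop then runs the same layout filter over each `bdev_get_bdevs -b BaseBdevN` response and requires an exact string match with the raid's layout, which is what the `[[ 512  == \5\1\2\ \ \  ]]` comparisons in the trace are doing.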
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:13.842 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:13.842 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:13.842 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:13.842 20:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.842 20:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.842 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.842 20:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.842 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:13.842 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:13.842 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:13.842 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:13.842 20:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.842 20:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.842 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.842 20:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.101 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:14.101 20:24:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:14.101 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:14.101 20:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.101 20:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.101 [2024-11-26 20:24:07.407995] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:14.101 [2024-11-26 20:24:07.408036] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:14.101 [2024-11-26 20:24:07.408111] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:14.101 20:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.101 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:14.101 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:14.101 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:14.101 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:14.101 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:14.101 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:14.101 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.101 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:14.101 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:14.101 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
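After `bdev_malloc_delete BaseBdev1`, the trace shows `has_redundancy concat` taking the `return 1` branch, which flips the expected raid state to `offline`. A sketch of that logic follows; note the case label for the redundant levels (`raid1`) is an assumption for illustration, since the trace only confirms that `concat` falls through to `return 1`:

```shell
# Sketch of bdev_raid.sh's has_redundancy helper. The redundant-level label
# (raid1) is assumed; the trace above only shows "concat" hitting return 1.
has_redundancy() {
    case $1 in
        raid1) return 0 ;;
        *) return 1 ;;
    esac
}

# bdev_raid.sh@261-262: concat has no redundancy, so losing a base bdev is
# expected to take the raid from online to offline.
if has_redundancy concat; then
    expected_state=online
else
    expected_state=offline
fi
echo "$expected_state"
```

The subsequent `verify_raid_bdev_state Existed_Raid offline concat 64 3` call in the trace matches this: three of four base bdevs remain discovered and the deleted slot shows `"name": null`.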
strip_size=64 00:11:14.101 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:14.101 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.101 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.101 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.101 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.101 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.101 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.101 20:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.101 20:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.101 20:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.101 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.101 "name": "Existed_Raid", 00:11:14.101 "uuid": "88ec51e9-ea02-43d1-80e0-4722c7ceb184", 00:11:14.101 "strip_size_kb": 64, 00:11:14.101 "state": "offline", 00:11:14.101 "raid_level": "concat", 00:11:14.101 "superblock": false, 00:11:14.101 "num_base_bdevs": 4, 00:11:14.101 "num_base_bdevs_discovered": 3, 00:11:14.101 "num_base_bdevs_operational": 3, 00:11:14.101 "base_bdevs_list": [ 00:11:14.101 { 00:11:14.101 "name": null, 00:11:14.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.101 "is_configured": false, 00:11:14.101 "data_offset": 0, 00:11:14.101 "data_size": 65536 00:11:14.101 }, 00:11:14.101 { 00:11:14.101 "name": "BaseBdev2", 00:11:14.101 "uuid": "f91dbce3-07c5-4c84-b1ea-f87257109ef9", 00:11:14.101 "is_configured": 
true, 00:11:14.101 "data_offset": 0, 00:11:14.101 "data_size": 65536 00:11:14.101 }, 00:11:14.101 { 00:11:14.101 "name": "BaseBdev3", 00:11:14.101 "uuid": "f657ad5a-4ea5-44ac-9e5d-90c8e6f79f38", 00:11:14.101 "is_configured": true, 00:11:14.101 "data_offset": 0, 00:11:14.101 "data_size": 65536 00:11:14.101 }, 00:11:14.101 { 00:11:14.101 "name": "BaseBdev4", 00:11:14.101 "uuid": "04f77004-e3bf-4cf8-8dee-b435b815b70c", 00:11:14.101 "is_configured": true, 00:11:14.101 "data_offset": 0, 00:11:14.101 "data_size": 65536 00:11:14.101 } 00:11:14.101 ] 00:11:14.101 }' 00:11:14.101 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.101 20:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.359 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:14.359 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:14.359 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.359 20:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.359 20:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.359 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:14.359 20:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.617 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:14.617 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:14.617 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:14.617 20:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:14.617 20:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.617 [2024-11-26 20:24:07.944769] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:14.617 20:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.617 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:14.617 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:14.617 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.617 20:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:14.617 20:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.617 20:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.617 20:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.617 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:14.617 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:14.617 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:14.617 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.617 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.617 [2024-11-26 20:24:08.016308] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:14.617 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.617 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:14.617 20:24:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:14.617 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:14.617 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.617 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.617 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.617 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.617 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:14.617 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:14.617 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:14.617 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.617 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.617 [2024-11-26 20:24:08.104813] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:14.617 [2024-11-26 20:24:08.104941] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:11:14.617 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.617 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:14.617 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:14.617 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.617 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:11:14.617 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.617 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.617 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.875 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:14.875 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:14.875 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:14.875 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:14.875 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:14.875 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:14.875 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.875 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.875 BaseBdev2 00:11:14.875 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.875 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:14.875 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:14.875 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:14.875 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:14.875 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:14.875 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:11:14.875 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:14.875 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.875 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.875 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.875 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:14.875 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.875 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.875 [ 00:11:14.875 { 00:11:14.875 "name": "BaseBdev2", 00:11:14.875 "aliases": [ 00:11:14.875 "1dd66fa7-33fc-4150-b20d-a8b44ffdd0b0" 00:11:14.875 ], 00:11:14.875 "product_name": "Malloc disk", 00:11:14.875 "block_size": 512, 00:11:14.875 "num_blocks": 65536, 00:11:14.875 "uuid": "1dd66fa7-33fc-4150-b20d-a8b44ffdd0b0", 00:11:14.875 "assigned_rate_limits": { 00:11:14.875 "rw_ios_per_sec": 0, 00:11:14.875 "rw_mbytes_per_sec": 0, 00:11:14.875 "r_mbytes_per_sec": 0, 00:11:14.875 "w_mbytes_per_sec": 0 00:11:14.875 }, 00:11:14.875 "claimed": false, 00:11:14.875 "zoned": false, 00:11:14.875 "supported_io_types": { 00:11:14.875 "read": true, 00:11:14.875 "write": true, 00:11:14.875 "unmap": true, 00:11:14.875 "flush": true, 00:11:14.875 "reset": true, 00:11:14.875 "nvme_admin": false, 00:11:14.875 "nvme_io": false, 00:11:14.875 "nvme_io_md": false, 00:11:14.875 "write_zeroes": true, 00:11:14.875 "zcopy": true, 00:11:14.875 "get_zone_info": false, 00:11:14.875 "zone_management": false, 00:11:14.875 "zone_append": false, 00:11:14.875 "compare": false, 00:11:14.875 "compare_and_write": false, 00:11:14.875 "abort": true, 00:11:14.875 "seek_hole": false, 00:11:14.875 
"seek_data": false, 00:11:14.875 "copy": true, 00:11:14.875 "nvme_iov_md": false 00:11:14.875 }, 00:11:14.875 "memory_domains": [ 00:11:14.875 { 00:11:14.875 "dma_device_id": "system", 00:11:14.875 "dma_device_type": 1 00:11:14.875 }, 00:11:14.875 { 00:11:14.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.876 "dma_device_type": 2 00:11:14.876 } 00:11:14.876 ], 00:11:14.876 "driver_specific": {} 00:11:14.876 } 00:11:14.876 ] 00:11:14.876 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.876 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:14.876 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:14.876 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:14.876 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:14.876 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.876 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.876 BaseBdev3 00:11:14.876 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.876 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:14.876 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:14.876 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:14.876 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:14.876 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:14.876 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:11:14.876 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:14.876 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.876 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.876 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.876 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:14.876 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.876 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.876 [ 00:11:14.876 { 00:11:14.876 "name": "BaseBdev3", 00:11:14.876 "aliases": [ 00:11:14.876 "0980955e-5028-49ed-98af-9b57f8e0a7c8" 00:11:14.876 ], 00:11:14.876 "product_name": "Malloc disk", 00:11:14.876 "block_size": 512, 00:11:14.876 "num_blocks": 65536, 00:11:14.876 "uuid": "0980955e-5028-49ed-98af-9b57f8e0a7c8", 00:11:14.876 "assigned_rate_limits": { 00:11:14.876 "rw_ios_per_sec": 0, 00:11:14.876 "rw_mbytes_per_sec": 0, 00:11:14.876 "r_mbytes_per_sec": 0, 00:11:14.876 "w_mbytes_per_sec": 0 00:11:14.876 }, 00:11:14.876 "claimed": false, 00:11:14.876 "zoned": false, 00:11:14.876 "supported_io_types": { 00:11:14.876 "read": true, 00:11:14.876 "write": true, 00:11:14.876 "unmap": true, 00:11:14.876 "flush": true, 00:11:14.876 "reset": true, 00:11:14.876 "nvme_admin": false, 00:11:14.876 "nvme_io": false, 00:11:14.876 "nvme_io_md": false, 00:11:14.876 "write_zeroes": true, 00:11:14.876 "zcopy": true, 00:11:14.876 "get_zone_info": false, 00:11:14.876 "zone_management": false, 00:11:14.876 "zone_append": false, 00:11:14.876 "compare": false, 00:11:14.876 "compare_and_write": false, 00:11:14.876 "abort": true, 00:11:14.876 "seek_hole": false, 00:11:14.876 "seek_data": false, 
00:11:14.876 "copy": true, 00:11:14.876 "nvme_iov_md": false 00:11:14.876 }, 00:11:14.876 "memory_domains": [ 00:11:14.876 { 00:11:14.876 "dma_device_id": "system", 00:11:14.876 "dma_device_type": 1 00:11:14.876 }, 00:11:14.876 { 00:11:14.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.876 "dma_device_type": 2 00:11:14.876 } 00:11:14.876 ], 00:11:14.876 "driver_specific": {} 00:11:14.876 } 00:11:14.876 ] 00:11:14.876 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.876 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:14.876 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:14.876 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:14.876 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:14.876 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.876 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.876 BaseBdev4 00:11:14.876 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.876 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:14.876 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:14.876 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:14.876 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:14.876 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:14.876 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:14.876 
20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:14.876 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.876 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.876 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.876 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:14.876 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.876 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.876 [ 00:11:14.876 { 00:11:14.876 "name": "BaseBdev4", 00:11:14.876 "aliases": [ 00:11:14.876 "55b71f92-b643-423a-b5f7-3cfe2c62909a" 00:11:14.876 ], 00:11:14.876 "product_name": "Malloc disk", 00:11:14.876 "block_size": 512, 00:11:14.876 "num_blocks": 65536, 00:11:14.876 "uuid": "55b71f92-b643-423a-b5f7-3cfe2c62909a", 00:11:14.876 "assigned_rate_limits": { 00:11:14.876 "rw_ios_per_sec": 0, 00:11:14.876 "rw_mbytes_per_sec": 0, 00:11:14.876 "r_mbytes_per_sec": 0, 00:11:14.876 "w_mbytes_per_sec": 0 00:11:14.876 }, 00:11:14.876 "claimed": false, 00:11:14.876 "zoned": false, 00:11:14.876 "supported_io_types": { 00:11:14.876 "read": true, 00:11:14.876 "write": true, 00:11:14.876 "unmap": true, 00:11:14.876 "flush": true, 00:11:14.876 "reset": true, 00:11:14.876 "nvme_admin": false, 00:11:14.876 "nvme_io": false, 00:11:14.876 "nvme_io_md": false, 00:11:14.876 "write_zeroes": true, 00:11:14.876 "zcopy": true, 00:11:14.876 "get_zone_info": false, 00:11:14.876 "zone_management": false, 00:11:14.876 "zone_append": false, 00:11:14.876 "compare": false, 00:11:14.876 "compare_and_write": false, 00:11:14.876 "abort": true, 00:11:14.876 "seek_hole": false, 00:11:14.876 "seek_data": false, 00:11:14.876 
"copy": true, 00:11:14.876 "nvme_iov_md": false 00:11:14.876 }, 00:11:14.876 "memory_domains": [ 00:11:14.876 { 00:11:14.876 "dma_device_id": "system", 00:11:14.876 "dma_device_type": 1 00:11:14.876 }, 00:11:14.876 { 00:11:14.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.876 "dma_device_type": 2 00:11:14.876 } 00:11:14.876 ], 00:11:14.876 "driver_specific": {} 00:11:14.876 } 00:11:14.876 ] 00:11:14.876 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.876 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:14.876 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:14.876 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:14.876 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:14.876 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.876 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.876 [2024-11-26 20:24:08.354086] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:14.876 [2024-11-26 20:24:08.354210] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:14.876 [2024-11-26 20:24:08.354270] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:14.876 [2024-11-26 20:24:08.356527] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:14.876 [2024-11-26 20:24:08.356659] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:14.876 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.877 20:24:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:14.877 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.877 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.877 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:14.877 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:14.877 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:14.877 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.877 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.877 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.877 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.877 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.877 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.877 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.877 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.877 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.877 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.877 "name": "Existed_Raid", 00:11:14.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.877 "strip_size_kb": 64, 00:11:14.877 "state": "configuring", 00:11:14.877 
"raid_level": "concat", 00:11:14.877 "superblock": false, 00:11:14.877 "num_base_bdevs": 4, 00:11:14.877 "num_base_bdevs_discovered": 3, 00:11:14.877 "num_base_bdevs_operational": 4, 00:11:14.877 "base_bdevs_list": [ 00:11:14.877 { 00:11:14.877 "name": "BaseBdev1", 00:11:14.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.877 "is_configured": false, 00:11:14.877 "data_offset": 0, 00:11:14.877 "data_size": 0 00:11:14.877 }, 00:11:14.877 { 00:11:14.877 "name": "BaseBdev2", 00:11:14.877 "uuid": "1dd66fa7-33fc-4150-b20d-a8b44ffdd0b0", 00:11:14.877 "is_configured": true, 00:11:14.877 "data_offset": 0, 00:11:14.877 "data_size": 65536 00:11:14.877 }, 00:11:14.877 { 00:11:14.877 "name": "BaseBdev3", 00:11:14.877 "uuid": "0980955e-5028-49ed-98af-9b57f8e0a7c8", 00:11:14.877 "is_configured": true, 00:11:14.877 "data_offset": 0, 00:11:14.877 "data_size": 65536 00:11:14.877 }, 00:11:14.877 { 00:11:14.877 "name": "BaseBdev4", 00:11:14.877 "uuid": "55b71f92-b643-423a-b5f7-3cfe2c62909a", 00:11:14.877 "is_configured": true, 00:11:14.877 "data_offset": 0, 00:11:14.877 "data_size": 65536 00:11:14.877 } 00:11:14.877 ] 00:11:14.877 }' 00:11:14.877 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.877 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.441 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:15.441 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.441 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.441 [2024-11-26 20:24:08.789412] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:15.441 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.442 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:15.442 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.442 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.442 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:15.442 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:15.442 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.442 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.442 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.442 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.442 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.442 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.442 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.442 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.442 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.442 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.442 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.442 "name": "Existed_Raid", 00:11:15.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.442 "strip_size_kb": 64, 00:11:15.442 "state": "configuring", 00:11:15.442 "raid_level": "concat", 00:11:15.442 "superblock": false, 
00:11:15.442 "num_base_bdevs": 4, 00:11:15.442 "num_base_bdevs_discovered": 2, 00:11:15.442 "num_base_bdevs_operational": 4, 00:11:15.442 "base_bdevs_list": [ 00:11:15.442 { 00:11:15.442 "name": "BaseBdev1", 00:11:15.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.442 "is_configured": false, 00:11:15.442 "data_offset": 0, 00:11:15.442 "data_size": 0 00:11:15.442 }, 00:11:15.442 { 00:11:15.442 "name": null, 00:11:15.442 "uuid": "1dd66fa7-33fc-4150-b20d-a8b44ffdd0b0", 00:11:15.442 "is_configured": false, 00:11:15.442 "data_offset": 0, 00:11:15.442 "data_size": 65536 00:11:15.442 }, 00:11:15.442 { 00:11:15.442 "name": "BaseBdev3", 00:11:15.442 "uuid": "0980955e-5028-49ed-98af-9b57f8e0a7c8", 00:11:15.442 "is_configured": true, 00:11:15.442 "data_offset": 0, 00:11:15.442 "data_size": 65536 00:11:15.442 }, 00:11:15.442 { 00:11:15.442 "name": "BaseBdev4", 00:11:15.442 "uuid": "55b71f92-b643-423a-b5f7-3cfe2c62909a", 00:11:15.442 "is_configured": true, 00:11:15.442 "data_offset": 0, 00:11:15.442 "data_size": 65536 00:11:15.442 } 00:11:15.442 ] 00:11:15.442 }' 00:11:15.442 20:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.442 20:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.700 20:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.700 20:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.700 20:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.700 20:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:15.958 20:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.958 20:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:15.958 20:24:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:15.958 20:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.958 20:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.958 [2024-11-26 20:24:09.310283] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:15.958 BaseBdev1 00:11:15.958 20:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.958 20:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:15.958 20:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:15.958 20:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:15.958 20:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:15.958 20:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:15.958 20:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:15.958 20:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:15.958 20:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.958 20:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.958 20:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.958 20:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:15.958 20:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.958 20:24:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:15.958 [ 00:11:15.958 { 00:11:15.958 "name": "BaseBdev1", 00:11:15.958 "aliases": [ 00:11:15.959 "3154b7a7-7dec-4952-b776-4713538442a0" 00:11:15.959 ], 00:11:15.959 "product_name": "Malloc disk", 00:11:15.959 "block_size": 512, 00:11:15.959 "num_blocks": 65536, 00:11:15.959 "uuid": "3154b7a7-7dec-4952-b776-4713538442a0", 00:11:15.959 "assigned_rate_limits": { 00:11:15.959 "rw_ios_per_sec": 0, 00:11:15.959 "rw_mbytes_per_sec": 0, 00:11:15.959 "r_mbytes_per_sec": 0, 00:11:15.959 "w_mbytes_per_sec": 0 00:11:15.959 }, 00:11:15.959 "claimed": true, 00:11:15.959 "claim_type": "exclusive_write", 00:11:15.959 "zoned": false, 00:11:15.959 "supported_io_types": { 00:11:15.959 "read": true, 00:11:15.959 "write": true, 00:11:15.959 "unmap": true, 00:11:15.959 "flush": true, 00:11:15.959 "reset": true, 00:11:15.959 "nvme_admin": false, 00:11:15.959 "nvme_io": false, 00:11:15.959 "nvme_io_md": false, 00:11:15.959 "write_zeroes": true, 00:11:15.959 "zcopy": true, 00:11:15.959 "get_zone_info": false, 00:11:15.959 "zone_management": false, 00:11:15.959 "zone_append": false, 00:11:15.959 "compare": false, 00:11:15.959 "compare_and_write": false, 00:11:15.959 "abort": true, 00:11:15.959 "seek_hole": false, 00:11:15.959 "seek_data": false, 00:11:15.959 "copy": true, 00:11:15.959 "nvme_iov_md": false 00:11:15.959 }, 00:11:15.959 "memory_domains": [ 00:11:15.959 { 00:11:15.959 "dma_device_id": "system", 00:11:15.959 "dma_device_type": 1 00:11:15.959 }, 00:11:15.959 { 00:11:15.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.959 "dma_device_type": 2 00:11:15.959 } 00:11:15.959 ], 00:11:15.959 "driver_specific": {} 00:11:15.959 } 00:11:15.959 ] 00:11:15.959 20:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.959 20:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:15.959 20:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:15.959 20:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.959 20:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.959 20:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:15.959 20:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:15.959 20:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.959 20:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.959 20:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.959 20:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.959 20:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.959 20:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.959 20:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.959 20:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.959 20:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.959 20:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.959 20:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.959 "name": "Existed_Raid", 00:11:15.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.959 "strip_size_kb": 64, 00:11:15.959 "state": "configuring", 00:11:15.959 "raid_level": "concat", 00:11:15.959 "superblock": false, 
00:11:15.959 "num_base_bdevs": 4, 00:11:15.959 "num_base_bdevs_discovered": 3, 00:11:15.959 "num_base_bdevs_operational": 4, 00:11:15.959 "base_bdevs_list": [ 00:11:15.959 { 00:11:15.959 "name": "BaseBdev1", 00:11:15.959 "uuid": "3154b7a7-7dec-4952-b776-4713538442a0", 00:11:15.959 "is_configured": true, 00:11:15.959 "data_offset": 0, 00:11:15.959 "data_size": 65536 00:11:15.959 }, 00:11:15.959 { 00:11:15.959 "name": null, 00:11:15.959 "uuid": "1dd66fa7-33fc-4150-b20d-a8b44ffdd0b0", 00:11:15.959 "is_configured": false, 00:11:15.959 "data_offset": 0, 00:11:15.959 "data_size": 65536 00:11:15.959 }, 00:11:15.959 { 00:11:15.959 "name": "BaseBdev3", 00:11:15.959 "uuid": "0980955e-5028-49ed-98af-9b57f8e0a7c8", 00:11:15.959 "is_configured": true, 00:11:15.959 "data_offset": 0, 00:11:15.959 "data_size": 65536 00:11:15.959 }, 00:11:15.959 { 00:11:15.959 "name": "BaseBdev4", 00:11:15.959 "uuid": "55b71f92-b643-423a-b5f7-3cfe2c62909a", 00:11:15.959 "is_configured": true, 00:11:15.959 "data_offset": 0, 00:11:15.959 "data_size": 65536 00:11:15.959 } 00:11:15.959 ] 00:11:15.959 }' 00:11:15.959 20:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.959 20:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.527 20:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.527 20:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:16.527 20:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.527 20:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.527 20:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.527 20:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:16.527 20:24:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:16.527 20:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.527 20:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.527 [2024-11-26 20:24:09.865448] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:16.527 20:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.527 20:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:16.527 20:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.527 20:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.527 20:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:16.527 20:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.527 20:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.527 20:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.527 20:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.527 20:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.527 20:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.527 20:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.527 20:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.527 20:24:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.527 20:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.527 20:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.527 20:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.527 "name": "Existed_Raid", 00:11:16.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.527 "strip_size_kb": 64, 00:11:16.527 "state": "configuring", 00:11:16.527 "raid_level": "concat", 00:11:16.527 "superblock": false, 00:11:16.527 "num_base_bdevs": 4, 00:11:16.527 "num_base_bdevs_discovered": 2, 00:11:16.527 "num_base_bdevs_operational": 4, 00:11:16.527 "base_bdevs_list": [ 00:11:16.527 { 00:11:16.527 "name": "BaseBdev1", 00:11:16.527 "uuid": "3154b7a7-7dec-4952-b776-4713538442a0", 00:11:16.527 "is_configured": true, 00:11:16.527 "data_offset": 0, 00:11:16.527 "data_size": 65536 00:11:16.527 }, 00:11:16.527 { 00:11:16.527 "name": null, 00:11:16.527 "uuid": "1dd66fa7-33fc-4150-b20d-a8b44ffdd0b0", 00:11:16.527 "is_configured": false, 00:11:16.527 "data_offset": 0, 00:11:16.527 "data_size": 65536 00:11:16.527 }, 00:11:16.527 { 00:11:16.527 "name": null, 00:11:16.527 "uuid": "0980955e-5028-49ed-98af-9b57f8e0a7c8", 00:11:16.527 "is_configured": false, 00:11:16.527 "data_offset": 0, 00:11:16.527 "data_size": 65536 00:11:16.527 }, 00:11:16.527 { 00:11:16.527 "name": "BaseBdev4", 00:11:16.527 "uuid": "55b71f92-b643-423a-b5f7-3cfe2c62909a", 00:11:16.527 "is_configured": true, 00:11:16.527 "data_offset": 0, 00:11:16.527 "data_size": 65536 00:11:16.527 } 00:11:16.527 ] 00:11:16.527 }' 00:11:16.527 20:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.527 20:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.786 20:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:11:16.786 20:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.786 20:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.786 20:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:16.786 20:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.046 20:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:17.046 20:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:17.046 20:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.046 20:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.046 [2024-11-26 20:24:10.372830] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:17.046 20:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.046 20:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:17.046 20:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.046 20:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.046 20:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:17.046 20:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:17.046 20:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:17.046 20:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:11:17.046 20:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.046 20:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.046 20:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.046 20:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.046 20:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.046 20:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.046 20:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.046 20:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.046 20:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.046 "name": "Existed_Raid", 00:11:17.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.046 "strip_size_kb": 64, 00:11:17.046 "state": "configuring", 00:11:17.046 "raid_level": "concat", 00:11:17.046 "superblock": false, 00:11:17.046 "num_base_bdevs": 4, 00:11:17.046 "num_base_bdevs_discovered": 3, 00:11:17.046 "num_base_bdevs_operational": 4, 00:11:17.046 "base_bdevs_list": [ 00:11:17.046 { 00:11:17.046 "name": "BaseBdev1", 00:11:17.046 "uuid": "3154b7a7-7dec-4952-b776-4713538442a0", 00:11:17.046 "is_configured": true, 00:11:17.046 "data_offset": 0, 00:11:17.046 "data_size": 65536 00:11:17.046 }, 00:11:17.046 { 00:11:17.046 "name": null, 00:11:17.046 "uuid": "1dd66fa7-33fc-4150-b20d-a8b44ffdd0b0", 00:11:17.046 "is_configured": false, 00:11:17.046 "data_offset": 0, 00:11:17.046 "data_size": 65536 00:11:17.046 }, 00:11:17.046 { 00:11:17.046 "name": "BaseBdev3", 00:11:17.046 "uuid": "0980955e-5028-49ed-98af-9b57f8e0a7c8", 00:11:17.046 
"is_configured": true, 00:11:17.046 "data_offset": 0, 00:11:17.046 "data_size": 65536 00:11:17.046 }, 00:11:17.046 { 00:11:17.046 "name": "BaseBdev4", 00:11:17.046 "uuid": "55b71f92-b643-423a-b5f7-3cfe2c62909a", 00:11:17.046 "is_configured": true, 00:11:17.046 "data_offset": 0, 00:11:17.046 "data_size": 65536 00:11:17.046 } 00:11:17.046 ] 00:11:17.046 }' 00:11:17.046 20:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.046 20:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.614 20:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.614 20:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.614 20:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.614 20:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:17.615 20:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.615 20:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:17.615 20:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:17.615 20:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.615 20:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.615 [2024-11-26 20:24:10.904809] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:17.615 20:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.615 20:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:17.615 20:24:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.615 20:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.615 20:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:17.615 20:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:17.615 20:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:17.615 20:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.615 20:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.615 20:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.615 20:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.615 20:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.615 20:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.615 20:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.615 20:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.615 20:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.615 20:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.615 "name": "Existed_Raid", 00:11:17.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.615 "strip_size_kb": 64, 00:11:17.615 "state": "configuring", 00:11:17.615 "raid_level": "concat", 00:11:17.615 "superblock": false, 00:11:17.615 "num_base_bdevs": 4, 00:11:17.615 "num_base_bdevs_discovered": 2, 00:11:17.615 "num_base_bdevs_operational": 4, 
00:11:17.615 "base_bdevs_list": [ 00:11:17.615 { 00:11:17.615 "name": null, 00:11:17.615 "uuid": "3154b7a7-7dec-4952-b776-4713538442a0", 00:11:17.615 "is_configured": false, 00:11:17.615 "data_offset": 0, 00:11:17.615 "data_size": 65536 00:11:17.615 }, 00:11:17.615 { 00:11:17.615 "name": null, 00:11:17.615 "uuid": "1dd66fa7-33fc-4150-b20d-a8b44ffdd0b0", 00:11:17.615 "is_configured": false, 00:11:17.615 "data_offset": 0, 00:11:17.615 "data_size": 65536 00:11:17.615 }, 00:11:17.615 { 00:11:17.615 "name": "BaseBdev3", 00:11:17.615 "uuid": "0980955e-5028-49ed-98af-9b57f8e0a7c8", 00:11:17.615 "is_configured": true, 00:11:17.615 "data_offset": 0, 00:11:17.615 "data_size": 65536 00:11:17.615 }, 00:11:17.615 { 00:11:17.615 "name": "BaseBdev4", 00:11:17.615 "uuid": "55b71f92-b643-423a-b5f7-3cfe2c62909a", 00:11:17.615 "is_configured": true, 00:11:17.615 "data_offset": 0, 00:11:17.615 "data_size": 65536 00:11:17.615 } 00:11:17.615 ] 00:11:17.615 }' 00:11:17.615 20:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.615 20:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.873 20:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.873 20:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:17.873 20:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.873 20:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.873 20:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.873 20:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:17.873 20:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:17.874 20:24:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.874 20:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.874 [2024-11-26 20:24:11.405207] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:17.874 20:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.874 20:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:17.874 20:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.874 20:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.874 20:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:17.874 20:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:17.874 20:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:17.874 20:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.874 20:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.874 20:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.874 20:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.874 20:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.874 20:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.874 20:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.874 20:24:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.133 20:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.133 20:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.133 "name": "Existed_Raid", 00:11:18.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.133 "strip_size_kb": 64, 00:11:18.133 "state": "configuring", 00:11:18.133 "raid_level": "concat", 00:11:18.133 "superblock": false, 00:11:18.133 "num_base_bdevs": 4, 00:11:18.133 "num_base_bdevs_discovered": 3, 00:11:18.133 "num_base_bdevs_operational": 4, 00:11:18.133 "base_bdevs_list": [ 00:11:18.133 { 00:11:18.133 "name": null, 00:11:18.133 "uuid": "3154b7a7-7dec-4952-b776-4713538442a0", 00:11:18.133 "is_configured": false, 00:11:18.133 "data_offset": 0, 00:11:18.133 "data_size": 65536 00:11:18.133 }, 00:11:18.133 { 00:11:18.133 "name": "BaseBdev2", 00:11:18.133 "uuid": "1dd66fa7-33fc-4150-b20d-a8b44ffdd0b0", 00:11:18.133 "is_configured": true, 00:11:18.133 "data_offset": 0, 00:11:18.133 "data_size": 65536 00:11:18.133 }, 00:11:18.133 { 00:11:18.133 "name": "BaseBdev3", 00:11:18.133 "uuid": "0980955e-5028-49ed-98af-9b57f8e0a7c8", 00:11:18.133 "is_configured": true, 00:11:18.133 "data_offset": 0, 00:11:18.133 "data_size": 65536 00:11:18.133 }, 00:11:18.133 { 00:11:18.133 "name": "BaseBdev4", 00:11:18.133 "uuid": "55b71f92-b643-423a-b5f7-3cfe2c62909a", 00:11:18.133 "is_configured": true, 00:11:18.133 "data_offset": 0, 00:11:18.133 "data_size": 65536 00:11:18.133 } 00:11:18.133 ] 00:11:18.133 }' 00:11:18.133 20:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.133 20:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.392 20:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.392 20:24:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:18.392 20:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.392 20:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.392 20:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.392 20:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:18.392 20:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:18.392 20:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.392 20:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.392 20:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.392 20:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.392 20:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3154b7a7-7dec-4952-b776-4713538442a0 00:11:18.392 20:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.392 20:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.650 [2024-11-26 20:24:11.950209] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:18.650 [2024-11-26 20:24:11.950361] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:11:18.650 [2024-11-26 20:24:11.950376] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:18.650 [2024-11-26 20:24:11.950698] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:18.650 [2024-11-26 20:24:11.950845] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:11:18.650 [2024-11-26 20:24:11.950861] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:11:18.650 [2024-11-26 20:24:11.951067] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:18.650 NewBaseBdev 00:11:18.650 20:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.650 20:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:18.650 20:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:18.650 20:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:18.650 20:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:18.650 20:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:18.650 20:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:18.650 20:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:18.650 20:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.650 20:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.650 20:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.650 20:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:18.650 20:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.650 20:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.650 [ 00:11:18.650 { 
00:11:18.650 "name": "NewBaseBdev", 00:11:18.650 "aliases": [ 00:11:18.650 "3154b7a7-7dec-4952-b776-4713538442a0" 00:11:18.650 ], 00:11:18.650 "product_name": "Malloc disk", 00:11:18.650 "block_size": 512, 00:11:18.650 "num_blocks": 65536, 00:11:18.650 "uuid": "3154b7a7-7dec-4952-b776-4713538442a0", 00:11:18.650 "assigned_rate_limits": { 00:11:18.650 "rw_ios_per_sec": 0, 00:11:18.650 "rw_mbytes_per_sec": 0, 00:11:18.650 "r_mbytes_per_sec": 0, 00:11:18.650 "w_mbytes_per_sec": 0 00:11:18.650 }, 00:11:18.650 "claimed": true, 00:11:18.650 "claim_type": "exclusive_write", 00:11:18.650 "zoned": false, 00:11:18.651 "supported_io_types": { 00:11:18.651 "read": true, 00:11:18.651 "write": true, 00:11:18.651 "unmap": true, 00:11:18.651 "flush": true, 00:11:18.651 "reset": true, 00:11:18.651 "nvme_admin": false, 00:11:18.651 "nvme_io": false, 00:11:18.651 "nvme_io_md": false, 00:11:18.651 "write_zeroes": true, 00:11:18.651 "zcopy": true, 00:11:18.651 "get_zone_info": false, 00:11:18.651 "zone_management": false, 00:11:18.651 "zone_append": false, 00:11:18.651 "compare": false, 00:11:18.651 "compare_and_write": false, 00:11:18.651 "abort": true, 00:11:18.651 "seek_hole": false, 00:11:18.651 "seek_data": false, 00:11:18.651 "copy": true, 00:11:18.651 "nvme_iov_md": false 00:11:18.651 }, 00:11:18.651 "memory_domains": [ 00:11:18.651 { 00:11:18.651 "dma_device_id": "system", 00:11:18.651 "dma_device_type": 1 00:11:18.651 }, 00:11:18.651 { 00:11:18.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.651 "dma_device_type": 2 00:11:18.651 } 00:11:18.651 ], 00:11:18.651 "driver_specific": {} 00:11:18.651 } 00:11:18.651 ] 00:11:18.651 20:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.651 20:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:18.651 20:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:18.651 
20:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.651 20:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:18.651 20:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:18.651 20:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:18.651 20:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.651 20:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.651 20:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.651 20:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.651 20:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.651 20:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.651 20:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.651 20:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.651 20:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.651 20:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.651 20:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.651 "name": "Existed_Raid", 00:11:18.651 "uuid": "4617d527-831d-4779-8675-4fa19d28138a", 00:11:18.651 "strip_size_kb": 64, 00:11:18.651 "state": "online", 00:11:18.651 "raid_level": "concat", 00:11:18.651 "superblock": false, 00:11:18.651 "num_base_bdevs": 4, 00:11:18.651 "num_base_bdevs_discovered": 4, 00:11:18.651 
"num_base_bdevs_operational": 4, 00:11:18.651 "base_bdevs_list": [ 00:11:18.651 { 00:11:18.651 "name": "NewBaseBdev", 00:11:18.651 "uuid": "3154b7a7-7dec-4952-b776-4713538442a0", 00:11:18.651 "is_configured": true, 00:11:18.651 "data_offset": 0, 00:11:18.651 "data_size": 65536 00:11:18.651 }, 00:11:18.651 { 00:11:18.651 "name": "BaseBdev2", 00:11:18.651 "uuid": "1dd66fa7-33fc-4150-b20d-a8b44ffdd0b0", 00:11:18.651 "is_configured": true, 00:11:18.651 "data_offset": 0, 00:11:18.651 "data_size": 65536 00:11:18.651 }, 00:11:18.651 { 00:11:18.651 "name": "BaseBdev3", 00:11:18.651 "uuid": "0980955e-5028-49ed-98af-9b57f8e0a7c8", 00:11:18.651 "is_configured": true, 00:11:18.651 "data_offset": 0, 00:11:18.651 "data_size": 65536 00:11:18.651 }, 00:11:18.651 { 00:11:18.651 "name": "BaseBdev4", 00:11:18.651 "uuid": "55b71f92-b643-423a-b5f7-3cfe2c62909a", 00:11:18.651 "is_configured": true, 00:11:18.651 "data_offset": 0, 00:11:18.651 "data_size": 65536 00:11:18.651 } 00:11:18.651 ] 00:11:18.651 }' 00:11:18.651 20:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.651 20:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.910 20:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:18.910 20:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:18.910 20:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:18.910 20:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:18.910 20:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:18.910 20:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:18.910 20:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:11:19.170 20:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.170 20:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.170 20:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:19.170 [2024-11-26 20:24:12.465863] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:19.170 20:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.170 20:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:19.170 "name": "Existed_Raid", 00:11:19.170 "aliases": [ 00:11:19.170 "4617d527-831d-4779-8675-4fa19d28138a" 00:11:19.170 ], 00:11:19.170 "product_name": "Raid Volume", 00:11:19.170 "block_size": 512, 00:11:19.170 "num_blocks": 262144, 00:11:19.170 "uuid": "4617d527-831d-4779-8675-4fa19d28138a", 00:11:19.170 "assigned_rate_limits": { 00:11:19.170 "rw_ios_per_sec": 0, 00:11:19.170 "rw_mbytes_per_sec": 0, 00:11:19.170 "r_mbytes_per_sec": 0, 00:11:19.170 "w_mbytes_per_sec": 0 00:11:19.170 }, 00:11:19.170 "claimed": false, 00:11:19.170 "zoned": false, 00:11:19.170 "supported_io_types": { 00:11:19.170 "read": true, 00:11:19.171 "write": true, 00:11:19.171 "unmap": true, 00:11:19.171 "flush": true, 00:11:19.171 "reset": true, 00:11:19.171 "nvme_admin": false, 00:11:19.171 "nvme_io": false, 00:11:19.171 "nvme_io_md": false, 00:11:19.171 "write_zeroes": true, 00:11:19.171 "zcopy": false, 00:11:19.171 "get_zone_info": false, 00:11:19.171 "zone_management": false, 00:11:19.171 "zone_append": false, 00:11:19.171 "compare": false, 00:11:19.171 "compare_and_write": false, 00:11:19.171 "abort": false, 00:11:19.171 "seek_hole": false, 00:11:19.171 "seek_data": false, 00:11:19.171 "copy": false, 00:11:19.171 "nvme_iov_md": false 00:11:19.171 }, 00:11:19.171 "memory_domains": [ 00:11:19.171 { 00:11:19.171 "dma_device_id": "system", 
00:11:19.171 "dma_device_type": 1 00:11:19.171 }, 00:11:19.171 { 00:11:19.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.171 "dma_device_type": 2 00:11:19.171 }, 00:11:19.171 { 00:11:19.171 "dma_device_id": "system", 00:11:19.171 "dma_device_type": 1 00:11:19.171 }, 00:11:19.171 { 00:11:19.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.171 "dma_device_type": 2 00:11:19.171 }, 00:11:19.171 { 00:11:19.171 "dma_device_id": "system", 00:11:19.171 "dma_device_type": 1 00:11:19.171 }, 00:11:19.171 { 00:11:19.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.171 "dma_device_type": 2 00:11:19.171 }, 00:11:19.171 { 00:11:19.171 "dma_device_id": "system", 00:11:19.171 "dma_device_type": 1 00:11:19.171 }, 00:11:19.171 { 00:11:19.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.171 "dma_device_type": 2 00:11:19.171 } 00:11:19.171 ], 00:11:19.171 "driver_specific": { 00:11:19.171 "raid": { 00:11:19.171 "uuid": "4617d527-831d-4779-8675-4fa19d28138a", 00:11:19.171 "strip_size_kb": 64, 00:11:19.171 "state": "online", 00:11:19.171 "raid_level": "concat", 00:11:19.171 "superblock": false, 00:11:19.171 "num_base_bdevs": 4, 00:11:19.171 "num_base_bdevs_discovered": 4, 00:11:19.171 "num_base_bdevs_operational": 4, 00:11:19.171 "base_bdevs_list": [ 00:11:19.171 { 00:11:19.171 "name": "NewBaseBdev", 00:11:19.171 "uuid": "3154b7a7-7dec-4952-b776-4713538442a0", 00:11:19.171 "is_configured": true, 00:11:19.171 "data_offset": 0, 00:11:19.171 "data_size": 65536 00:11:19.171 }, 00:11:19.171 { 00:11:19.171 "name": "BaseBdev2", 00:11:19.171 "uuid": "1dd66fa7-33fc-4150-b20d-a8b44ffdd0b0", 00:11:19.171 "is_configured": true, 00:11:19.171 "data_offset": 0, 00:11:19.171 "data_size": 65536 00:11:19.171 }, 00:11:19.171 { 00:11:19.171 "name": "BaseBdev3", 00:11:19.171 "uuid": "0980955e-5028-49ed-98af-9b57f8e0a7c8", 00:11:19.171 "is_configured": true, 00:11:19.171 "data_offset": 0, 00:11:19.171 "data_size": 65536 00:11:19.171 }, 00:11:19.171 { 00:11:19.171 "name": "BaseBdev4", 
00:11:19.171 "uuid": "55b71f92-b643-423a-b5f7-3cfe2c62909a", 00:11:19.171 "is_configured": true, 00:11:19.171 "data_offset": 0, 00:11:19.171 "data_size": 65536 00:11:19.171 } 00:11:19.171 ] 00:11:19.171 } 00:11:19.171 } 00:11:19.171 }' 00:11:19.171 20:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:19.171 20:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:19.171 BaseBdev2 00:11:19.171 BaseBdev3 00:11:19.171 BaseBdev4' 00:11:19.171 20:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.171 20:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:19.171 20:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:19.171 20:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.171 20:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:19.171 20:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.171 20:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.171 20:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.171 20:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:19.171 20:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:19.171 20:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:19.171 20:24:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:19.171 20:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.171 20:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.171 20:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.171 20:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.171 20:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:19.171 20:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:19.171 20:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:19.171 20:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.171 20:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:19.171 20:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.171 20:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.430 20:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.430 20:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:19.430 20:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:19.430 20:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:19.430 20:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:19.430 20:24:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.430 20:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.430 20:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.430 20:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.430 20:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:19.430 20:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:19.430 20:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:19.430 20:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.430 20:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.430 [2024-11-26 20:24:12.796870] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:19.430 [2024-11-26 20:24:12.796979] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:19.431 [2024-11-26 20:24:12.797082] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:19.431 [2024-11-26 20:24:12.797161] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:19.431 [2024-11-26 20:24:12.797173] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:11:19.431 20:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.431 20:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82641 00:11:19.431 20:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 
-- # '[' -z 82641 ']' 00:11:19.431 20:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 82641 00:11:19.431 20:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:11:19.431 20:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:19.431 20:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82641 00:11:19.431 20:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:19.431 20:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:19.431 20:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82641' 00:11:19.431 killing process with pid 82641 00:11:19.431 20:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 82641 00:11:19.431 20:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 82641 00:11:19.431 [2024-11-26 20:24:12.851115] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:19.431 [2024-11-26 20:24:12.918018] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:19.996 20:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:19.996 00:11:19.996 real 0m10.315s 00:11:19.996 user 0m17.479s 00:11:19.996 sys 0m2.020s 00:11:19.996 20:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:19.996 20:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.996 ************************************ 00:11:19.996 END TEST raid_state_function_test 00:11:19.996 ************************************ 00:11:19.996 20:24:13 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 
00:11:19.997 20:24:13 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:19.997 20:24:13 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:19.997 20:24:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:19.997 ************************************ 00:11:19.997 START TEST raid_state_function_test_sb 00:11:19.997 ************************************ 00:11:19.997 20:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 true 00:11:19.997 20:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:19.997 20:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:19.997 20:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:19.997 20:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:19.997 20:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:19.997 20:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:19.997 20:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:19.997 20:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:19.997 20:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:19.997 20:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:19.997 20:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:19.997 20:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:19.997 20:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:19.997 20:24:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:19.997 20:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:19.997 20:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:19.997 20:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:19.997 20:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:19.997 20:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:19.997 20:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:19.997 20:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:19.997 20:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:19.997 20:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:19.997 20:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:19.997 20:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:19.997 20:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:19.997 20:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:19.997 20:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:19.997 20:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:19.997 20:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83296 00:11:19.997 20:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:19.997 Process raid pid: 83296 00:11:19.997 20:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83296' 00:11:19.997 20:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83296 00:11:19.997 20:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 83296 ']' 00:11:19.997 20:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:19.997 20:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:19.997 20:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:19.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:19.997 20:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:19.997 20:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.997 [2024-11-26 20:24:13.466769] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:11:19.997 [2024-11-26 20:24:13.467000] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:20.256 [2024-11-26 20:24:13.632908] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.256 [2024-11-26 20:24:13.718340] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.256 [2024-11-26 20:24:13.798212] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:20.256 [2024-11-26 20:24:13.798259] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:21.191 20:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:21.191 20:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:11:21.191 20:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:21.191 20:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.191 20:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.191 [2024-11-26 20:24:14.390264] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:21.192 [2024-11-26 20:24:14.390322] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:21.192 [2024-11-26 20:24:14.390336] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:21.192 [2024-11-26 20:24:14.390347] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:21.192 [2024-11-26 20:24:14.390354] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:11:21.192 [2024-11-26 20:24:14.390367] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:21.192 [2024-11-26 20:24:14.390374] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:21.192 [2024-11-26 20:24:14.390385] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:21.192 20:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.192 20:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:21.192 20:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.192 20:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.192 20:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:21.192 20:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:21.192 20:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.192 20:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.192 20:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.192 20:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.192 20:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.192 20:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.192 20:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.192 
20:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.192 20:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.192 20:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.192 20:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.192 "name": "Existed_Raid", 00:11:21.192 "uuid": "4b70e20d-9e90-4fe9-8a03-565ee625f625", 00:11:21.192 "strip_size_kb": 64, 00:11:21.192 "state": "configuring", 00:11:21.192 "raid_level": "concat", 00:11:21.192 "superblock": true, 00:11:21.192 "num_base_bdevs": 4, 00:11:21.192 "num_base_bdevs_discovered": 0, 00:11:21.192 "num_base_bdevs_operational": 4, 00:11:21.192 "base_bdevs_list": [ 00:11:21.192 { 00:11:21.192 "name": "BaseBdev1", 00:11:21.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.192 "is_configured": false, 00:11:21.192 "data_offset": 0, 00:11:21.192 "data_size": 0 00:11:21.192 }, 00:11:21.192 { 00:11:21.192 "name": "BaseBdev2", 00:11:21.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.192 "is_configured": false, 00:11:21.192 "data_offset": 0, 00:11:21.192 "data_size": 0 00:11:21.192 }, 00:11:21.192 { 00:11:21.192 "name": "BaseBdev3", 00:11:21.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.192 "is_configured": false, 00:11:21.192 "data_offset": 0, 00:11:21.192 "data_size": 0 00:11:21.192 }, 00:11:21.192 { 00:11:21.192 "name": "BaseBdev4", 00:11:21.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.192 "is_configured": false, 00:11:21.192 "data_offset": 0, 00:11:21.192 "data_size": 0 00:11:21.192 } 00:11:21.192 ] 00:11:21.192 }' 00:11:21.192 20:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.192 20:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.451 20:24:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:21.451 20:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.451 20:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.451 [2024-11-26 20:24:14.809487] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:21.451 [2024-11-26 20:24:14.809596] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:11:21.451 20:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.451 20:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:21.451 20:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.451 20:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.451 [2024-11-26 20:24:14.817540] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:21.451 [2024-11-26 20:24:14.817653] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:21.451 [2024-11-26 20:24:14.817693] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:21.451 [2024-11-26 20:24:14.817734] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:21.452 [2024-11-26 20:24:14.817765] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:21.452 [2024-11-26 20:24:14.817799] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:21.452 [2024-11-26 20:24:14.817829] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:11:21.452 [2024-11-26 20:24:14.817863] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:21.452 20:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.452 20:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:21.452 20:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.452 20:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.452 [2024-11-26 20:24:14.835622] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:21.452 BaseBdev1 00:11:21.452 20:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.452 20:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:21.452 20:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:21.452 20:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:21.452 20:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:21.452 20:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:21.452 20:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:21.452 20:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:21.452 20:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.452 20:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.452 20:24:14 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.452 20:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:21.452 20:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.452 20:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.452 [ 00:11:21.452 { 00:11:21.452 "name": "BaseBdev1", 00:11:21.452 "aliases": [ 00:11:21.452 "7109b1ab-2c78-4bc7-b276-59aa62b6e9d7" 00:11:21.452 ], 00:11:21.452 "product_name": "Malloc disk", 00:11:21.452 "block_size": 512, 00:11:21.452 "num_blocks": 65536, 00:11:21.452 "uuid": "7109b1ab-2c78-4bc7-b276-59aa62b6e9d7", 00:11:21.452 "assigned_rate_limits": { 00:11:21.452 "rw_ios_per_sec": 0, 00:11:21.452 "rw_mbytes_per_sec": 0, 00:11:21.452 "r_mbytes_per_sec": 0, 00:11:21.452 "w_mbytes_per_sec": 0 00:11:21.452 }, 00:11:21.452 "claimed": true, 00:11:21.452 "claim_type": "exclusive_write", 00:11:21.452 "zoned": false, 00:11:21.452 "supported_io_types": { 00:11:21.452 "read": true, 00:11:21.452 "write": true, 00:11:21.452 "unmap": true, 00:11:21.452 "flush": true, 00:11:21.452 "reset": true, 00:11:21.452 "nvme_admin": false, 00:11:21.452 "nvme_io": false, 00:11:21.452 "nvme_io_md": false, 00:11:21.452 "write_zeroes": true, 00:11:21.452 "zcopy": true, 00:11:21.452 "get_zone_info": false, 00:11:21.452 "zone_management": false, 00:11:21.452 "zone_append": false, 00:11:21.452 "compare": false, 00:11:21.452 "compare_and_write": false, 00:11:21.452 "abort": true, 00:11:21.452 "seek_hole": false, 00:11:21.452 "seek_data": false, 00:11:21.452 "copy": true, 00:11:21.452 "nvme_iov_md": false 00:11:21.452 }, 00:11:21.452 "memory_domains": [ 00:11:21.452 { 00:11:21.452 "dma_device_id": "system", 00:11:21.452 "dma_device_type": 1 00:11:21.452 }, 00:11:21.452 { 00:11:21.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.452 "dma_device_type": 2 00:11:21.452 } 
00:11:21.452 ], 00:11:21.452 "driver_specific": {} 00:11:21.452 } 00:11:21.452 ] 00:11:21.452 20:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.452 20:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:21.452 20:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:21.452 20:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.452 20:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.452 20:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:21.452 20:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:21.452 20:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.452 20:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.452 20:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.452 20:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.452 20:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.452 20:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.452 20:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.452 20:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.452 20:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.452 20:24:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.452 20:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.452 "name": "Existed_Raid", 00:11:21.452 "uuid": "7a985161-00eb-4ef4-b21c-af9d6299e539", 00:11:21.452 "strip_size_kb": 64, 00:11:21.452 "state": "configuring", 00:11:21.452 "raid_level": "concat", 00:11:21.452 "superblock": true, 00:11:21.452 "num_base_bdevs": 4, 00:11:21.452 "num_base_bdevs_discovered": 1, 00:11:21.452 "num_base_bdevs_operational": 4, 00:11:21.452 "base_bdevs_list": [ 00:11:21.452 { 00:11:21.452 "name": "BaseBdev1", 00:11:21.452 "uuid": "7109b1ab-2c78-4bc7-b276-59aa62b6e9d7", 00:11:21.452 "is_configured": true, 00:11:21.452 "data_offset": 2048, 00:11:21.452 "data_size": 63488 00:11:21.452 }, 00:11:21.452 { 00:11:21.452 "name": "BaseBdev2", 00:11:21.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.452 "is_configured": false, 00:11:21.452 "data_offset": 0, 00:11:21.452 "data_size": 0 00:11:21.452 }, 00:11:21.452 { 00:11:21.452 "name": "BaseBdev3", 00:11:21.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.452 "is_configured": false, 00:11:21.452 "data_offset": 0, 00:11:21.452 "data_size": 0 00:11:21.452 }, 00:11:21.452 { 00:11:21.452 "name": "BaseBdev4", 00:11:21.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.452 "is_configured": false, 00:11:21.452 "data_offset": 0, 00:11:21.452 "data_size": 0 00:11:21.452 } 00:11:21.452 ] 00:11:21.452 }' 00:11:21.452 20:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.452 20:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.020 20:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:22.020 20:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.020 20:24:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.020 [2024-11-26 20:24:15.294922] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:22.020 [2024-11-26 20:24:15.295050] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:11:22.020 20:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.020 20:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:22.020 20:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.020 20:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.020 [2024-11-26 20:24:15.306960] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:22.020 [2024-11-26 20:24:15.309300] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:22.020 [2024-11-26 20:24:15.309406] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:22.020 [2024-11-26 20:24:15.309451] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:22.020 [2024-11-26 20:24:15.309502] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:22.020 [2024-11-26 20:24:15.309535] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:22.020 [2024-11-26 20:24:15.309577] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:22.020 20:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.020 20:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:11:22.020 20:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:22.020 20:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:22.020 20:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.020 20:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.020 20:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:22.020 20:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:22.020 20:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.020 20:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.020 20:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.020 20:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.020 20:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.020 20:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.020 20:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.020 20:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.020 20:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.020 20:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.020 20:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:11:22.020 "name": "Existed_Raid", 00:11:22.020 "uuid": "6f15496f-2174-4523-80c3-d64670246db9", 00:11:22.020 "strip_size_kb": 64, 00:11:22.020 "state": "configuring", 00:11:22.020 "raid_level": "concat", 00:11:22.020 "superblock": true, 00:11:22.020 "num_base_bdevs": 4, 00:11:22.020 "num_base_bdevs_discovered": 1, 00:11:22.020 "num_base_bdevs_operational": 4, 00:11:22.020 "base_bdevs_list": [ 00:11:22.020 { 00:11:22.020 "name": "BaseBdev1", 00:11:22.020 "uuid": "7109b1ab-2c78-4bc7-b276-59aa62b6e9d7", 00:11:22.020 "is_configured": true, 00:11:22.020 "data_offset": 2048, 00:11:22.020 "data_size": 63488 00:11:22.020 }, 00:11:22.020 { 00:11:22.020 "name": "BaseBdev2", 00:11:22.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.020 "is_configured": false, 00:11:22.020 "data_offset": 0, 00:11:22.020 "data_size": 0 00:11:22.020 }, 00:11:22.020 { 00:11:22.020 "name": "BaseBdev3", 00:11:22.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.020 "is_configured": false, 00:11:22.020 "data_offset": 0, 00:11:22.020 "data_size": 0 00:11:22.020 }, 00:11:22.020 { 00:11:22.020 "name": "BaseBdev4", 00:11:22.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.021 "is_configured": false, 00:11:22.021 "data_offset": 0, 00:11:22.021 "data_size": 0 00:11:22.021 } 00:11:22.021 ] 00:11:22.021 }' 00:11:22.021 20:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.021 20:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.280 20:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:22.280 20:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.280 20:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.280 [2024-11-26 20:24:15.778987] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:11:22.280 BaseBdev2 00:11:22.280 20:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.280 20:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:22.280 20:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:22.280 20:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:22.280 20:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:22.280 20:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:22.280 20:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:22.280 20:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:22.280 20:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.280 20:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.280 20:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.280 20:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:22.280 20:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.280 20:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.280 [ 00:11:22.280 { 00:11:22.280 "name": "BaseBdev2", 00:11:22.280 "aliases": [ 00:11:22.280 "d6ef2a06-e4b4-490c-97ec-c01f70f85f43" 00:11:22.280 ], 00:11:22.280 "product_name": "Malloc disk", 00:11:22.280 "block_size": 512, 00:11:22.280 "num_blocks": 65536, 00:11:22.280 "uuid": "d6ef2a06-e4b4-490c-97ec-c01f70f85f43", 
00:11:22.280 "assigned_rate_limits": { 00:11:22.280 "rw_ios_per_sec": 0, 00:11:22.280 "rw_mbytes_per_sec": 0, 00:11:22.280 "r_mbytes_per_sec": 0, 00:11:22.280 "w_mbytes_per_sec": 0 00:11:22.280 }, 00:11:22.280 "claimed": true, 00:11:22.280 "claim_type": "exclusive_write", 00:11:22.280 "zoned": false, 00:11:22.280 "supported_io_types": { 00:11:22.280 "read": true, 00:11:22.280 "write": true, 00:11:22.280 "unmap": true, 00:11:22.280 "flush": true, 00:11:22.280 "reset": true, 00:11:22.280 "nvme_admin": false, 00:11:22.280 "nvme_io": false, 00:11:22.280 "nvme_io_md": false, 00:11:22.280 "write_zeroes": true, 00:11:22.280 "zcopy": true, 00:11:22.280 "get_zone_info": false, 00:11:22.280 "zone_management": false, 00:11:22.280 "zone_append": false, 00:11:22.280 "compare": false, 00:11:22.280 "compare_and_write": false, 00:11:22.280 "abort": true, 00:11:22.280 "seek_hole": false, 00:11:22.280 "seek_data": false, 00:11:22.280 "copy": true, 00:11:22.280 "nvme_iov_md": false 00:11:22.280 }, 00:11:22.280 "memory_domains": [ 00:11:22.280 { 00:11:22.280 "dma_device_id": "system", 00:11:22.280 "dma_device_type": 1 00:11:22.280 }, 00:11:22.280 { 00:11:22.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.280 "dma_device_type": 2 00:11:22.280 } 00:11:22.280 ], 00:11:22.280 "driver_specific": {} 00:11:22.280 } 00:11:22.280 ] 00:11:22.280 20:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.280 20:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:22.280 20:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:22.280 20:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:22.280 20:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:22.280 20:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:11:22.280 20:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.280 20:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:22.280 20:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:22.280 20:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.280 20:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.280 20:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.280 20:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.280 20:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.280 20:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.280 20:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.280 20:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.280 20:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.540 20:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.540 20:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.540 "name": "Existed_Raid", 00:11:22.540 "uuid": "6f15496f-2174-4523-80c3-d64670246db9", 00:11:22.540 "strip_size_kb": 64, 00:11:22.540 "state": "configuring", 00:11:22.540 "raid_level": "concat", 00:11:22.540 "superblock": true, 00:11:22.540 "num_base_bdevs": 4, 00:11:22.540 "num_base_bdevs_discovered": 2, 00:11:22.540 
"num_base_bdevs_operational": 4, 00:11:22.540 "base_bdevs_list": [ 00:11:22.540 { 00:11:22.540 "name": "BaseBdev1", 00:11:22.540 "uuid": "7109b1ab-2c78-4bc7-b276-59aa62b6e9d7", 00:11:22.540 "is_configured": true, 00:11:22.540 "data_offset": 2048, 00:11:22.540 "data_size": 63488 00:11:22.540 }, 00:11:22.540 { 00:11:22.540 "name": "BaseBdev2", 00:11:22.540 "uuid": "d6ef2a06-e4b4-490c-97ec-c01f70f85f43", 00:11:22.540 "is_configured": true, 00:11:22.540 "data_offset": 2048, 00:11:22.540 "data_size": 63488 00:11:22.540 }, 00:11:22.540 { 00:11:22.540 "name": "BaseBdev3", 00:11:22.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.540 "is_configured": false, 00:11:22.540 "data_offset": 0, 00:11:22.540 "data_size": 0 00:11:22.540 }, 00:11:22.540 { 00:11:22.540 "name": "BaseBdev4", 00:11:22.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.540 "is_configured": false, 00:11:22.540 "data_offset": 0, 00:11:22.540 "data_size": 0 00:11:22.540 } 00:11:22.540 ] 00:11:22.540 }' 00:11:22.540 20:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.540 20:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.802 20:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:22.802 20:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.802 20:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.802 [2024-11-26 20:24:16.320157] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:22.802 BaseBdev3 00:11:22.802 20:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.802 20:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:22.802 20:24:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:22.802 20:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:22.802 20:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:22.802 20:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:22.802 20:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:22.802 20:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:22.802 20:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.802 20:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.802 20:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.802 20:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:22.802 20:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.802 20:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.802 [ 00:11:22.802 { 00:11:22.802 "name": "BaseBdev3", 00:11:22.802 "aliases": [ 00:11:22.802 "da14d821-d16f-4e9c-8f90-39d108bbdf73" 00:11:22.802 ], 00:11:22.802 "product_name": "Malloc disk", 00:11:22.802 "block_size": 512, 00:11:22.802 "num_blocks": 65536, 00:11:22.802 "uuid": "da14d821-d16f-4e9c-8f90-39d108bbdf73", 00:11:22.802 "assigned_rate_limits": { 00:11:22.802 "rw_ios_per_sec": 0, 00:11:22.802 "rw_mbytes_per_sec": 0, 00:11:22.802 "r_mbytes_per_sec": 0, 00:11:22.802 "w_mbytes_per_sec": 0 00:11:22.802 }, 00:11:22.802 "claimed": true, 00:11:22.802 "claim_type": "exclusive_write", 00:11:22.802 "zoned": false, 00:11:22.802 "supported_io_types": { 
00:11:22.802 "read": true, 00:11:22.802 "write": true, 00:11:22.802 "unmap": true, 00:11:22.802 "flush": true, 00:11:22.802 "reset": true, 00:11:22.802 "nvme_admin": false, 00:11:22.802 "nvme_io": false, 00:11:22.802 "nvme_io_md": false, 00:11:23.068 "write_zeroes": true, 00:11:23.068 "zcopy": true, 00:11:23.068 "get_zone_info": false, 00:11:23.068 "zone_management": false, 00:11:23.068 "zone_append": false, 00:11:23.068 "compare": false, 00:11:23.068 "compare_and_write": false, 00:11:23.068 "abort": true, 00:11:23.068 "seek_hole": false, 00:11:23.068 "seek_data": false, 00:11:23.068 "copy": true, 00:11:23.068 "nvme_iov_md": false 00:11:23.068 }, 00:11:23.068 "memory_domains": [ 00:11:23.068 { 00:11:23.068 "dma_device_id": "system", 00:11:23.068 "dma_device_type": 1 00:11:23.068 }, 00:11:23.068 { 00:11:23.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.068 "dma_device_type": 2 00:11:23.068 } 00:11:23.068 ], 00:11:23.068 "driver_specific": {} 00:11:23.068 } 00:11:23.068 ] 00:11:23.068 20:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.068 20:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:23.068 20:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:23.068 20:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:23.068 20:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:23.068 20:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.068 20:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:23.068 20:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:23.068 20:24:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:23.068 20:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:23.068 20:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.068 20:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.068 20:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.068 20:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.068 20:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.068 20:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.068 20:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.068 20:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.068 20:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.068 20:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.068 "name": "Existed_Raid", 00:11:23.068 "uuid": "6f15496f-2174-4523-80c3-d64670246db9", 00:11:23.068 "strip_size_kb": 64, 00:11:23.068 "state": "configuring", 00:11:23.068 "raid_level": "concat", 00:11:23.068 "superblock": true, 00:11:23.068 "num_base_bdevs": 4, 00:11:23.068 "num_base_bdevs_discovered": 3, 00:11:23.068 "num_base_bdevs_operational": 4, 00:11:23.068 "base_bdevs_list": [ 00:11:23.068 { 00:11:23.068 "name": "BaseBdev1", 00:11:23.068 "uuid": "7109b1ab-2c78-4bc7-b276-59aa62b6e9d7", 00:11:23.068 "is_configured": true, 00:11:23.068 "data_offset": 2048, 00:11:23.068 "data_size": 63488 00:11:23.068 }, 00:11:23.068 { 00:11:23.068 "name": "BaseBdev2", 00:11:23.068 
"uuid": "d6ef2a06-e4b4-490c-97ec-c01f70f85f43", 00:11:23.068 "is_configured": true, 00:11:23.068 "data_offset": 2048, 00:11:23.068 "data_size": 63488 00:11:23.068 }, 00:11:23.068 { 00:11:23.068 "name": "BaseBdev3", 00:11:23.068 "uuid": "da14d821-d16f-4e9c-8f90-39d108bbdf73", 00:11:23.068 "is_configured": true, 00:11:23.068 "data_offset": 2048, 00:11:23.068 "data_size": 63488 00:11:23.068 }, 00:11:23.068 { 00:11:23.068 "name": "BaseBdev4", 00:11:23.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.068 "is_configured": false, 00:11:23.068 "data_offset": 0, 00:11:23.068 "data_size": 0 00:11:23.068 } 00:11:23.068 ] 00:11:23.068 }' 00:11:23.068 20:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.068 20:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.328 20:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:23.328 20:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.328 20:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.328 [2024-11-26 20:24:16.829015] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:23.328 [2024-11-26 20:24:16.829262] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:11:23.328 [2024-11-26 20:24:16.829280] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:23.328 BaseBdev4 00:11:23.328 [2024-11-26 20:24:16.829636] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:23.328 [2024-11-26 20:24:16.829787] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:11:23.328 [2024-11-26 20:24:16.829806] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000006980 00:11:23.328 [2024-11-26 20:24:16.829952] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:23.328 20:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.328 20:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:23.328 20:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:23.328 20:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:23.328 20:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:23.328 20:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:23.328 20:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:23.328 20:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:23.328 20:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.328 20:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.328 20:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.328 20:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:23.328 20:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.328 20:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.328 [ 00:11:23.328 { 00:11:23.328 "name": "BaseBdev4", 00:11:23.328 "aliases": [ 00:11:23.328 "cd09963e-444d-4ea7-8f94-b9221bedcdb0" 00:11:23.328 ], 00:11:23.328 "product_name": "Malloc disk", 00:11:23.328 "block_size": 512, 00:11:23.328 
"num_blocks": 65536, 00:11:23.328 "uuid": "cd09963e-444d-4ea7-8f94-b9221bedcdb0", 00:11:23.328 "assigned_rate_limits": { 00:11:23.328 "rw_ios_per_sec": 0, 00:11:23.328 "rw_mbytes_per_sec": 0, 00:11:23.328 "r_mbytes_per_sec": 0, 00:11:23.328 "w_mbytes_per_sec": 0 00:11:23.328 }, 00:11:23.328 "claimed": true, 00:11:23.328 "claim_type": "exclusive_write", 00:11:23.328 "zoned": false, 00:11:23.328 "supported_io_types": { 00:11:23.328 "read": true, 00:11:23.328 "write": true, 00:11:23.328 "unmap": true, 00:11:23.328 "flush": true, 00:11:23.328 "reset": true, 00:11:23.328 "nvme_admin": false, 00:11:23.328 "nvme_io": false, 00:11:23.328 "nvme_io_md": false, 00:11:23.328 "write_zeroes": true, 00:11:23.328 "zcopy": true, 00:11:23.328 "get_zone_info": false, 00:11:23.328 "zone_management": false, 00:11:23.328 "zone_append": false, 00:11:23.328 "compare": false, 00:11:23.328 "compare_and_write": false, 00:11:23.328 "abort": true, 00:11:23.328 "seek_hole": false, 00:11:23.328 "seek_data": false, 00:11:23.328 "copy": true, 00:11:23.328 "nvme_iov_md": false 00:11:23.328 }, 00:11:23.328 "memory_domains": [ 00:11:23.328 { 00:11:23.328 "dma_device_id": "system", 00:11:23.328 "dma_device_type": 1 00:11:23.328 }, 00:11:23.328 { 00:11:23.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.328 "dma_device_type": 2 00:11:23.328 } 00:11:23.328 ], 00:11:23.328 "driver_specific": {} 00:11:23.328 } 00:11:23.328 ] 00:11:23.328 20:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.328 20:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:23.328 20:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:23.328 20:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:23.328 20:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:11:23.328 20:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.328 20:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:23.328 20:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:23.328 20:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:23.328 20:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:23.328 20:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.328 20:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.328 20:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.328 20:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.328 20:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.328 20:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.328 20:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.328 20:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.588 20:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.588 20:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.588 "name": "Existed_Raid", 00:11:23.588 "uuid": "6f15496f-2174-4523-80c3-d64670246db9", 00:11:23.588 "strip_size_kb": 64, 00:11:23.588 "state": "online", 00:11:23.588 "raid_level": "concat", 00:11:23.588 "superblock": true, 00:11:23.588 "num_base_bdevs": 4, 
00:11:23.588 "num_base_bdevs_discovered": 4, 00:11:23.588 "num_base_bdevs_operational": 4, 00:11:23.588 "base_bdevs_list": [ 00:11:23.588 { 00:11:23.588 "name": "BaseBdev1", 00:11:23.588 "uuid": "7109b1ab-2c78-4bc7-b276-59aa62b6e9d7", 00:11:23.588 "is_configured": true, 00:11:23.588 "data_offset": 2048, 00:11:23.588 "data_size": 63488 00:11:23.588 }, 00:11:23.588 { 00:11:23.588 "name": "BaseBdev2", 00:11:23.588 "uuid": "d6ef2a06-e4b4-490c-97ec-c01f70f85f43", 00:11:23.588 "is_configured": true, 00:11:23.588 "data_offset": 2048, 00:11:23.588 "data_size": 63488 00:11:23.588 }, 00:11:23.588 { 00:11:23.588 "name": "BaseBdev3", 00:11:23.588 "uuid": "da14d821-d16f-4e9c-8f90-39d108bbdf73", 00:11:23.588 "is_configured": true, 00:11:23.588 "data_offset": 2048, 00:11:23.588 "data_size": 63488 00:11:23.588 }, 00:11:23.588 { 00:11:23.588 "name": "BaseBdev4", 00:11:23.588 "uuid": "cd09963e-444d-4ea7-8f94-b9221bedcdb0", 00:11:23.588 "is_configured": true, 00:11:23.588 "data_offset": 2048, 00:11:23.588 "data_size": 63488 00:11:23.588 } 00:11:23.588 ] 00:11:23.588 }' 00:11:23.588 20:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.588 20:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.847 20:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:23.847 20:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:23.847 20:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:23.847 20:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:23.847 20:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:23.847 20:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:23.847 
20:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:23.847 20:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:23.847 20:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.847 20:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.847 [2024-11-26 20:24:17.301146] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:23.847 20:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.847 20:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:23.847 "name": "Existed_Raid", 00:11:23.847 "aliases": [ 00:11:23.847 "6f15496f-2174-4523-80c3-d64670246db9" 00:11:23.847 ], 00:11:23.847 "product_name": "Raid Volume", 00:11:23.847 "block_size": 512, 00:11:23.847 "num_blocks": 253952, 00:11:23.847 "uuid": "6f15496f-2174-4523-80c3-d64670246db9", 00:11:23.847 "assigned_rate_limits": { 00:11:23.847 "rw_ios_per_sec": 0, 00:11:23.847 "rw_mbytes_per_sec": 0, 00:11:23.847 "r_mbytes_per_sec": 0, 00:11:23.847 "w_mbytes_per_sec": 0 00:11:23.847 }, 00:11:23.847 "claimed": false, 00:11:23.847 "zoned": false, 00:11:23.847 "supported_io_types": { 00:11:23.847 "read": true, 00:11:23.847 "write": true, 00:11:23.847 "unmap": true, 00:11:23.847 "flush": true, 00:11:23.847 "reset": true, 00:11:23.847 "nvme_admin": false, 00:11:23.847 "nvme_io": false, 00:11:23.847 "nvme_io_md": false, 00:11:23.847 "write_zeroes": true, 00:11:23.847 "zcopy": false, 00:11:23.847 "get_zone_info": false, 00:11:23.847 "zone_management": false, 00:11:23.847 "zone_append": false, 00:11:23.847 "compare": false, 00:11:23.847 "compare_and_write": false, 00:11:23.847 "abort": false, 00:11:23.847 "seek_hole": false, 00:11:23.847 "seek_data": false, 00:11:23.847 "copy": false, 00:11:23.847 
"nvme_iov_md": false 00:11:23.847 }, 00:11:23.847 "memory_domains": [ 00:11:23.847 { 00:11:23.847 "dma_device_id": "system", 00:11:23.847 "dma_device_type": 1 00:11:23.847 }, 00:11:23.847 { 00:11:23.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.847 "dma_device_type": 2 00:11:23.847 }, 00:11:23.847 { 00:11:23.847 "dma_device_id": "system", 00:11:23.847 "dma_device_type": 1 00:11:23.847 }, 00:11:23.847 { 00:11:23.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.847 "dma_device_type": 2 00:11:23.847 }, 00:11:23.847 { 00:11:23.847 "dma_device_id": "system", 00:11:23.847 "dma_device_type": 1 00:11:23.847 }, 00:11:23.847 { 00:11:23.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.847 "dma_device_type": 2 00:11:23.847 }, 00:11:23.847 { 00:11:23.847 "dma_device_id": "system", 00:11:23.847 "dma_device_type": 1 00:11:23.847 }, 00:11:23.847 { 00:11:23.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.847 "dma_device_type": 2 00:11:23.847 } 00:11:23.847 ], 00:11:23.847 "driver_specific": { 00:11:23.847 "raid": { 00:11:23.847 "uuid": "6f15496f-2174-4523-80c3-d64670246db9", 00:11:23.847 "strip_size_kb": 64, 00:11:23.847 "state": "online", 00:11:23.847 "raid_level": "concat", 00:11:23.847 "superblock": true, 00:11:23.847 "num_base_bdevs": 4, 00:11:23.847 "num_base_bdevs_discovered": 4, 00:11:23.847 "num_base_bdevs_operational": 4, 00:11:23.847 "base_bdevs_list": [ 00:11:23.847 { 00:11:23.847 "name": "BaseBdev1", 00:11:23.847 "uuid": "7109b1ab-2c78-4bc7-b276-59aa62b6e9d7", 00:11:23.847 "is_configured": true, 00:11:23.847 "data_offset": 2048, 00:11:23.847 "data_size": 63488 00:11:23.847 }, 00:11:23.847 { 00:11:23.847 "name": "BaseBdev2", 00:11:23.847 "uuid": "d6ef2a06-e4b4-490c-97ec-c01f70f85f43", 00:11:23.847 "is_configured": true, 00:11:23.847 "data_offset": 2048, 00:11:23.847 "data_size": 63488 00:11:23.847 }, 00:11:23.847 { 00:11:23.847 "name": "BaseBdev3", 00:11:23.847 "uuid": "da14d821-d16f-4e9c-8f90-39d108bbdf73", 00:11:23.847 "is_configured": true, 
00:11:23.847 "data_offset": 2048, 00:11:23.847 "data_size": 63488 00:11:23.847 }, 00:11:23.847 { 00:11:23.847 "name": "BaseBdev4", 00:11:23.847 "uuid": "cd09963e-444d-4ea7-8f94-b9221bedcdb0", 00:11:23.847 "is_configured": true, 00:11:23.847 "data_offset": 2048, 00:11:23.847 "data_size": 63488 00:11:23.847 } 00:11:23.847 ] 00:11:23.847 } 00:11:23.847 } 00:11:23.847 }' 00:11:23.847 20:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:23.847 20:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:23.847 BaseBdev2 00:11:23.847 BaseBdev3 00:11:23.847 BaseBdev4' 00:11:23.847 20:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.107 20:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:24.107 20:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.107 20:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:24.107 20:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.107 20:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.107 20:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.107 20:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.107 20:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:24.107 20:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:24.107 20:24:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.107 20:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:24.107 20:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.107 20:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.107 20:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.107 20:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.107 20:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:24.107 20:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:24.107 20:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.107 20:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:24.107 20:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.107 20:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.107 20:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.107 20:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.107 20:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:24.107 20:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:24.107 20:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:24.107 20:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:24.107 20:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.107 20:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.107 20:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.107 20:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.107 20:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:24.107 20:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:24.107 20:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:24.107 20:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.107 20:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.107 [2024-11-26 20:24:17.636686] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:24.107 [2024-11-26 20:24:17.636724] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:24.107 [2024-11-26 20:24:17.636792] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:24.107 20:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.107 20:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:24.107 20:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:24.107 20:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:11:24.107 20:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:24.107 20:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:24.107 20:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:24.107 20:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.107 20:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:24.107 20:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:24.107 20:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:24.107 20:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:24.107 20:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.107 20:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.107 20:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.107 20:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.367 20:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.367 20:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.367 20:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.367 20:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.367 20:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:24.367 20:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.367 "name": "Existed_Raid", 00:11:24.367 "uuid": "6f15496f-2174-4523-80c3-d64670246db9", 00:11:24.367 "strip_size_kb": 64, 00:11:24.367 "state": "offline", 00:11:24.367 "raid_level": "concat", 00:11:24.367 "superblock": true, 00:11:24.367 "num_base_bdevs": 4, 00:11:24.367 "num_base_bdevs_discovered": 3, 00:11:24.367 "num_base_bdevs_operational": 3, 00:11:24.367 "base_bdevs_list": [ 00:11:24.367 { 00:11:24.367 "name": null, 00:11:24.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.367 "is_configured": false, 00:11:24.367 "data_offset": 0, 00:11:24.367 "data_size": 63488 00:11:24.367 }, 00:11:24.367 { 00:11:24.367 "name": "BaseBdev2", 00:11:24.367 "uuid": "d6ef2a06-e4b4-490c-97ec-c01f70f85f43", 00:11:24.367 "is_configured": true, 00:11:24.367 "data_offset": 2048, 00:11:24.367 "data_size": 63488 00:11:24.367 }, 00:11:24.367 { 00:11:24.367 "name": "BaseBdev3", 00:11:24.367 "uuid": "da14d821-d16f-4e9c-8f90-39d108bbdf73", 00:11:24.367 "is_configured": true, 00:11:24.367 "data_offset": 2048, 00:11:24.367 "data_size": 63488 00:11:24.367 }, 00:11:24.367 { 00:11:24.367 "name": "BaseBdev4", 00:11:24.367 "uuid": "cd09963e-444d-4ea7-8f94-b9221bedcdb0", 00:11:24.367 "is_configured": true, 00:11:24.367 "data_offset": 2048, 00:11:24.367 "data_size": 63488 00:11:24.367 } 00:11:24.367 ] 00:11:24.367 }' 00:11:24.367 20:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.367 20:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.627 20:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:24.627 20:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:24.627 20:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.627 
20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.627 20:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:24.627 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.627 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.887 20:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:24.887 20:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:24.887 20:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:24.887 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.887 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.887 [2024-11-26 20:24:18.192631] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:24.887 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.887 20:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:24.887 20:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:24.887 20:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.887 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.887 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.887 20:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:24.887 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:24.887 20:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:24.887 20:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:24.887 20:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:24.887 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.887 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.887 [2024-11-26 20:24:18.277330] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:24.887 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.887 20:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:24.887 20:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:24.887 20:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.887 20:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:24.887 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.887 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.887 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.887 20:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:24.887 20:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:24.887 20:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:24.887 20:24:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.887 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.887 [2024-11-26 20:24:18.353184] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:24.887 [2024-11-26 20:24:18.353313] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:11:24.887 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.887 20:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:24.887 20:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:24.887 20:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.887 20:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:24.887 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.887 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.887 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.887 20:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:24.887 20:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:24.887 20:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:24.887 20:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:24.887 20:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:24.887 20:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:11:24.887 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.887 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.151 BaseBdev2 00:11:25.151 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.151 20:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:25.151 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:25.151 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:25.151 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:25.151 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:25.151 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:25.151 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:25.151 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.151 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.151 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.151 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:25.151 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.151 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.151 [ 00:11:25.151 { 00:11:25.151 "name": "BaseBdev2", 00:11:25.151 "aliases": [ 00:11:25.151 
"80414a4d-c974-484b-87de-2ea7320e76a7" 00:11:25.151 ], 00:11:25.151 "product_name": "Malloc disk", 00:11:25.151 "block_size": 512, 00:11:25.151 "num_blocks": 65536, 00:11:25.151 "uuid": "80414a4d-c974-484b-87de-2ea7320e76a7", 00:11:25.151 "assigned_rate_limits": { 00:11:25.151 "rw_ios_per_sec": 0, 00:11:25.151 "rw_mbytes_per_sec": 0, 00:11:25.151 "r_mbytes_per_sec": 0, 00:11:25.151 "w_mbytes_per_sec": 0 00:11:25.151 }, 00:11:25.151 "claimed": false, 00:11:25.151 "zoned": false, 00:11:25.151 "supported_io_types": { 00:11:25.151 "read": true, 00:11:25.151 "write": true, 00:11:25.151 "unmap": true, 00:11:25.151 "flush": true, 00:11:25.151 "reset": true, 00:11:25.151 "nvme_admin": false, 00:11:25.151 "nvme_io": false, 00:11:25.151 "nvme_io_md": false, 00:11:25.151 "write_zeroes": true, 00:11:25.151 "zcopy": true, 00:11:25.151 "get_zone_info": false, 00:11:25.151 "zone_management": false, 00:11:25.151 "zone_append": false, 00:11:25.151 "compare": false, 00:11:25.151 "compare_and_write": false, 00:11:25.151 "abort": true, 00:11:25.151 "seek_hole": false, 00:11:25.151 "seek_data": false, 00:11:25.151 "copy": true, 00:11:25.151 "nvme_iov_md": false 00:11:25.151 }, 00:11:25.151 "memory_domains": [ 00:11:25.151 { 00:11:25.151 "dma_device_id": "system", 00:11:25.151 "dma_device_type": 1 00:11:25.151 }, 00:11:25.151 { 00:11:25.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.151 "dma_device_type": 2 00:11:25.151 } 00:11:25.151 ], 00:11:25.151 "driver_specific": {} 00:11:25.151 } 00:11:25.151 ] 00:11:25.151 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.151 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:25.151 20:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:25.151 20:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:25.151 20:24:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:25.151 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.151 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.151 BaseBdev3 00:11:25.151 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.152 [ 00:11:25.152 { 
00:11:25.152 "name": "BaseBdev3", 00:11:25.152 "aliases": [ 00:11:25.152 "bfc51eac-7a35-4016-8038-fd70a8208202" 00:11:25.152 ], 00:11:25.152 "product_name": "Malloc disk", 00:11:25.152 "block_size": 512, 00:11:25.152 "num_blocks": 65536, 00:11:25.152 "uuid": "bfc51eac-7a35-4016-8038-fd70a8208202", 00:11:25.152 "assigned_rate_limits": { 00:11:25.152 "rw_ios_per_sec": 0, 00:11:25.152 "rw_mbytes_per_sec": 0, 00:11:25.152 "r_mbytes_per_sec": 0, 00:11:25.152 "w_mbytes_per_sec": 0 00:11:25.152 }, 00:11:25.152 "claimed": false, 00:11:25.152 "zoned": false, 00:11:25.152 "supported_io_types": { 00:11:25.152 "read": true, 00:11:25.152 "write": true, 00:11:25.152 "unmap": true, 00:11:25.152 "flush": true, 00:11:25.152 "reset": true, 00:11:25.152 "nvme_admin": false, 00:11:25.152 "nvme_io": false, 00:11:25.152 "nvme_io_md": false, 00:11:25.152 "write_zeroes": true, 00:11:25.152 "zcopy": true, 00:11:25.152 "get_zone_info": false, 00:11:25.152 "zone_management": false, 00:11:25.152 "zone_append": false, 00:11:25.152 "compare": false, 00:11:25.152 "compare_and_write": false, 00:11:25.152 "abort": true, 00:11:25.152 "seek_hole": false, 00:11:25.152 "seek_data": false, 00:11:25.152 "copy": true, 00:11:25.152 "nvme_iov_md": false 00:11:25.152 }, 00:11:25.152 "memory_domains": [ 00:11:25.152 { 00:11:25.152 "dma_device_id": "system", 00:11:25.152 "dma_device_type": 1 00:11:25.152 }, 00:11:25.152 { 00:11:25.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.152 "dma_device_type": 2 00:11:25.152 } 00:11:25.152 ], 00:11:25.152 "driver_specific": {} 00:11:25.152 } 00:11:25.152 ] 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.152 BaseBdev4 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:11:25.152 [ 00:11:25.152 { 00:11:25.152 "name": "BaseBdev4", 00:11:25.152 "aliases": [ 00:11:25.152 "c9dfab88-1b57-4626-9dee-979ff5262098" 00:11:25.152 ], 00:11:25.152 "product_name": "Malloc disk", 00:11:25.152 "block_size": 512, 00:11:25.152 "num_blocks": 65536, 00:11:25.152 "uuid": "c9dfab88-1b57-4626-9dee-979ff5262098", 00:11:25.152 "assigned_rate_limits": { 00:11:25.152 "rw_ios_per_sec": 0, 00:11:25.152 "rw_mbytes_per_sec": 0, 00:11:25.152 "r_mbytes_per_sec": 0, 00:11:25.152 "w_mbytes_per_sec": 0 00:11:25.152 }, 00:11:25.152 "claimed": false, 00:11:25.152 "zoned": false, 00:11:25.152 "supported_io_types": { 00:11:25.152 "read": true, 00:11:25.152 "write": true, 00:11:25.152 "unmap": true, 00:11:25.152 "flush": true, 00:11:25.152 "reset": true, 00:11:25.152 "nvme_admin": false, 00:11:25.152 "nvme_io": false, 00:11:25.152 "nvme_io_md": false, 00:11:25.152 "write_zeroes": true, 00:11:25.152 "zcopy": true, 00:11:25.152 "get_zone_info": false, 00:11:25.152 "zone_management": false, 00:11:25.152 "zone_append": false, 00:11:25.152 "compare": false, 00:11:25.152 "compare_and_write": false, 00:11:25.152 "abort": true, 00:11:25.152 "seek_hole": false, 00:11:25.152 "seek_data": false, 00:11:25.152 "copy": true, 00:11:25.152 "nvme_iov_md": false 00:11:25.152 }, 00:11:25.152 "memory_domains": [ 00:11:25.152 { 00:11:25.152 "dma_device_id": "system", 00:11:25.152 "dma_device_type": 1 00:11:25.152 }, 00:11:25.152 { 00:11:25.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.152 "dma_device_type": 2 00:11:25.152 } 00:11:25.152 ], 00:11:25.152 "driver_specific": {} 00:11:25.152 } 00:11:25.152 ] 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:25.152 20:24:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.152 [2024-11-26 20:24:18.586510] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:25.152 [2024-11-26 20:24:18.586671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:25.152 [2024-11-26 20:24:18.586737] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:25.152 [2024-11-26 20:24:18.589017] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:25.152 [2024-11-26 20:24:18.589137] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.152 20:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.152 "name": "Existed_Raid", 00:11:25.152 "uuid": "191f172c-aa25-4691-aa55-1372fb933827", 00:11:25.152 "strip_size_kb": 64, 00:11:25.152 "state": "configuring", 00:11:25.152 "raid_level": "concat", 00:11:25.152 "superblock": true, 00:11:25.152 "num_base_bdevs": 4, 00:11:25.152 "num_base_bdevs_discovered": 3, 00:11:25.152 "num_base_bdevs_operational": 4, 00:11:25.152 "base_bdevs_list": [ 00:11:25.152 { 00:11:25.152 "name": "BaseBdev1", 00:11:25.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.153 "is_configured": false, 00:11:25.153 "data_offset": 0, 00:11:25.153 "data_size": 0 00:11:25.153 }, 00:11:25.153 { 00:11:25.153 "name": "BaseBdev2", 00:11:25.153 "uuid": "80414a4d-c974-484b-87de-2ea7320e76a7", 00:11:25.153 "is_configured": true, 00:11:25.153 "data_offset": 2048, 00:11:25.153 "data_size": 63488 
00:11:25.153 }, 00:11:25.153 { 00:11:25.153 "name": "BaseBdev3", 00:11:25.153 "uuid": "bfc51eac-7a35-4016-8038-fd70a8208202", 00:11:25.153 "is_configured": true, 00:11:25.153 "data_offset": 2048, 00:11:25.153 "data_size": 63488 00:11:25.153 }, 00:11:25.153 { 00:11:25.153 "name": "BaseBdev4", 00:11:25.153 "uuid": "c9dfab88-1b57-4626-9dee-979ff5262098", 00:11:25.153 "is_configured": true, 00:11:25.153 "data_offset": 2048, 00:11:25.153 "data_size": 63488 00:11:25.153 } 00:11:25.153 ] 00:11:25.153 }' 00:11:25.153 20:24:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.153 20:24:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.720 20:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:25.720 20:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.720 20:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.720 [2024-11-26 20:24:19.057712] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:25.720 20:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.720 20:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:25.720 20:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.720 20:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.720 20:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:25.720 20:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:25.720 20:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:25.720 20:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.720 20:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.720 20:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.720 20:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.720 20:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.720 20:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.720 20:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.720 20:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.720 20:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.720 20:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.720 "name": "Existed_Raid", 00:11:25.720 "uuid": "191f172c-aa25-4691-aa55-1372fb933827", 00:11:25.720 "strip_size_kb": 64, 00:11:25.720 "state": "configuring", 00:11:25.720 "raid_level": "concat", 00:11:25.720 "superblock": true, 00:11:25.720 "num_base_bdevs": 4, 00:11:25.720 "num_base_bdevs_discovered": 2, 00:11:25.720 "num_base_bdevs_operational": 4, 00:11:25.720 "base_bdevs_list": [ 00:11:25.720 { 00:11:25.720 "name": "BaseBdev1", 00:11:25.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.720 "is_configured": false, 00:11:25.720 "data_offset": 0, 00:11:25.720 "data_size": 0 00:11:25.720 }, 00:11:25.720 { 00:11:25.720 "name": null, 00:11:25.720 "uuid": "80414a4d-c974-484b-87de-2ea7320e76a7", 00:11:25.720 "is_configured": false, 00:11:25.720 "data_offset": 0, 00:11:25.720 "data_size": 63488 
00:11:25.720 }, 00:11:25.720 { 00:11:25.720 "name": "BaseBdev3", 00:11:25.720 "uuid": "bfc51eac-7a35-4016-8038-fd70a8208202", 00:11:25.720 "is_configured": true, 00:11:25.720 "data_offset": 2048, 00:11:25.720 "data_size": 63488 00:11:25.720 }, 00:11:25.720 { 00:11:25.720 "name": "BaseBdev4", 00:11:25.720 "uuid": "c9dfab88-1b57-4626-9dee-979ff5262098", 00:11:25.720 "is_configured": true, 00:11:25.720 "data_offset": 2048, 00:11:25.720 "data_size": 63488 00:11:25.720 } 00:11:25.720 ] 00:11:25.720 }' 00:11:25.720 20:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.720 20:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.287 20:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.287 20:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.287 20:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.287 20:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:26.287 20:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.287 20:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:26.287 20:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:26.287 20:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.287 20:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.287 [2024-11-26 20:24:19.620645] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:26.287 BaseBdev1 00:11:26.287 20:24:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.287 20:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:26.287 20:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:26.287 20:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:26.287 20:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:26.287 20:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:26.287 20:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:26.287 20:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:26.287 20:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.287 20:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.287 20:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.287 20:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:26.287 20:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.287 20:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.287 [ 00:11:26.287 { 00:11:26.287 "name": "BaseBdev1", 00:11:26.287 "aliases": [ 00:11:26.287 "3fc53af2-62b3-43ef-985f-c253dde65173" 00:11:26.287 ], 00:11:26.287 "product_name": "Malloc disk", 00:11:26.287 "block_size": 512, 00:11:26.287 "num_blocks": 65536, 00:11:26.287 "uuid": "3fc53af2-62b3-43ef-985f-c253dde65173", 00:11:26.287 "assigned_rate_limits": { 00:11:26.287 "rw_ios_per_sec": 0, 00:11:26.287 "rw_mbytes_per_sec": 0, 
00:11:26.287 "r_mbytes_per_sec": 0, 00:11:26.287 "w_mbytes_per_sec": 0 00:11:26.287 }, 00:11:26.287 "claimed": true, 00:11:26.287 "claim_type": "exclusive_write", 00:11:26.287 "zoned": false, 00:11:26.287 "supported_io_types": { 00:11:26.287 "read": true, 00:11:26.287 "write": true, 00:11:26.287 "unmap": true, 00:11:26.287 "flush": true, 00:11:26.287 "reset": true, 00:11:26.287 "nvme_admin": false, 00:11:26.287 "nvme_io": false, 00:11:26.287 "nvme_io_md": false, 00:11:26.287 "write_zeroes": true, 00:11:26.287 "zcopy": true, 00:11:26.287 "get_zone_info": false, 00:11:26.287 "zone_management": false, 00:11:26.287 "zone_append": false, 00:11:26.287 "compare": false, 00:11:26.287 "compare_and_write": false, 00:11:26.287 "abort": true, 00:11:26.287 "seek_hole": false, 00:11:26.287 "seek_data": false, 00:11:26.287 "copy": true, 00:11:26.287 "nvme_iov_md": false 00:11:26.287 }, 00:11:26.287 "memory_domains": [ 00:11:26.287 { 00:11:26.287 "dma_device_id": "system", 00:11:26.287 "dma_device_type": 1 00:11:26.287 }, 00:11:26.287 { 00:11:26.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.287 "dma_device_type": 2 00:11:26.287 } 00:11:26.287 ], 00:11:26.287 "driver_specific": {} 00:11:26.287 } 00:11:26.287 ] 00:11:26.287 20:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.287 20:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:26.287 20:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:26.287 20:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.287 20:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.287 20:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:26.287 20:24:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:26.287 20:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.287 20:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.287 20:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.287 20:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.287 20:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.287 20:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.287 20:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.287 20:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.287 20:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.287 20:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.287 20:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.287 "name": "Existed_Raid", 00:11:26.287 "uuid": "191f172c-aa25-4691-aa55-1372fb933827", 00:11:26.287 "strip_size_kb": 64, 00:11:26.287 "state": "configuring", 00:11:26.287 "raid_level": "concat", 00:11:26.287 "superblock": true, 00:11:26.287 "num_base_bdevs": 4, 00:11:26.287 "num_base_bdevs_discovered": 3, 00:11:26.287 "num_base_bdevs_operational": 4, 00:11:26.287 "base_bdevs_list": [ 00:11:26.287 { 00:11:26.287 "name": "BaseBdev1", 00:11:26.287 "uuid": "3fc53af2-62b3-43ef-985f-c253dde65173", 00:11:26.287 "is_configured": true, 00:11:26.287 "data_offset": 2048, 00:11:26.287 "data_size": 63488 00:11:26.287 }, 00:11:26.287 { 
00:11:26.287 "name": null, 00:11:26.287 "uuid": "80414a4d-c974-484b-87de-2ea7320e76a7", 00:11:26.287 "is_configured": false, 00:11:26.287 "data_offset": 0, 00:11:26.287 "data_size": 63488 00:11:26.287 }, 00:11:26.287 { 00:11:26.287 "name": "BaseBdev3", 00:11:26.287 "uuid": "bfc51eac-7a35-4016-8038-fd70a8208202", 00:11:26.287 "is_configured": true, 00:11:26.287 "data_offset": 2048, 00:11:26.287 "data_size": 63488 00:11:26.287 }, 00:11:26.287 { 00:11:26.287 "name": "BaseBdev4", 00:11:26.287 "uuid": "c9dfab88-1b57-4626-9dee-979ff5262098", 00:11:26.287 "is_configured": true, 00:11:26.287 "data_offset": 2048, 00:11:26.287 "data_size": 63488 00:11:26.287 } 00:11:26.287 ] 00:11:26.287 }' 00:11:26.288 20:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.288 20:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.889 20:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:26.889 20:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.889 20:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.889 20:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.889 20:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.889 20:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:26.889 20:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:26.889 20:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.889 20:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.889 [2024-11-26 20:24:20.163782] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:26.889 20:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.889 20:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:26.889 20:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.889 20:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.889 20:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:26.889 20:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:26.889 20:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.889 20:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.889 20:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.889 20:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.889 20:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.889 20:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.889 20:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.889 20:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.889 20:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.889 20:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.889 20:24:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.889 "name": "Existed_Raid", 00:11:26.889 "uuid": "191f172c-aa25-4691-aa55-1372fb933827", 00:11:26.889 "strip_size_kb": 64, 00:11:26.889 "state": "configuring", 00:11:26.889 "raid_level": "concat", 00:11:26.889 "superblock": true, 00:11:26.889 "num_base_bdevs": 4, 00:11:26.889 "num_base_bdevs_discovered": 2, 00:11:26.889 "num_base_bdevs_operational": 4, 00:11:26.889 "base_bdevs_list": [ 00:11:26.889 { 00:11:26.889 "name": "BaseBdev1", 00:11:26.889 "uuid": "3fc53af2-62b3-43ef-985f-c253dde65173", 00:11:26.889 "is_configured": true, 00:11:26.889 "data_offset": 2048, 00:11:26.889 "data_size": 63488 00:11:26.889 }, 00:11:26.889 { 00:11:26.889 "name": null, 00:11:26.889 "uuid": "80414a4d-c974-484b-87de-2ea7320e76a7", 00:11:26.889 "is_configured": false, 00:11:26.889 "data_offset": 0, 00:11:26.890 "data_size": 63488 00:11:26.890 }, 00:11:26.890 { 00:11:26.890 "name": null, 00:11:26.890 "uuid": "bfc51eac-7a35-4016-8038-fd70a8208202", 00:11:26.890 "is_configured": false, 00:11:26.890 "data_offset": 0, 00:11:26.890 "data_size": 63488 00:11:26.890 }, 00:11:26.890 { 00:11:26.890 "name": "BaseBdev4", 00:11:26.890 "uuid": "c9dfab88-1b57-4626-9dee-979ff5262098", 00:11:26.890 "is_configured": true, 00:11:26.890 "data_offset": 2048, 00:11:26.890 "data_size": 63488 00:11:26.890 } 00:11:26.890 ] 00:11:26.890 }' 00:11:26.890 20:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.890 20:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.152 20:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.152 20:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.152 20:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.152 20:24:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:27.152 20:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.152 20:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:27.152 20:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:27.152 20:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.152 20:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.152 [2024-11-26 20:24:20.694953] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:27.152 20:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.152 20:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:27.152 20:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.152 20:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.152 20:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:27.152 20:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:27.152 20:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.152 20:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.152 20:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.152 20:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:27.152 20:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.412 20:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.412 20:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.412 20:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.412 20:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.412 20:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.412 20:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.412 "name": "Existed_Raid", 00:11:27.412 "uuid": "191f172c-aa25-4691-aa55-1372fb933827", 00:11:27.412 "strip_size_kb": 64, 00:11:27.412 "state": "configuring", 00:11:27.412 "raid_level": "concat", 00:11:27.412 "superblock": true, 00:11:27.412 "num_base_bdevs": 4, 00:11:27.412 "num_base_bdevs_discovered": 3, 00:11:27.412 "num_base_bdevs_operational": 4, 00:11:27.413 "base_bdevs_list": [ 00:11:27.413 { 00:11:27.413 "name": "BaseBdev1", 00:11:27.413 "uuid": "3fc53af2-62b3-43ef-985f-c253dde65173", 00:11:27.413 "is_configured": true, 00:11:27.413 "data_offset": 2048, 00:11:27.413 "data_size": 63488 00:11:27.413 }, 00:11:27.413 { 00:11:27.413 "name": null, 00:11:27.413 "uuid": "80414a4d-c974-484b-87de-2ea7320e76a7", 00:11:27.413 "is_configured": false, 00:11:27.413 "data_offset": 0, 00:11:27.413 "data_size": 63488 00:11:27.413 }, 00:11:27.413 { 00:11:27.413 "name": "BaseBdev3", 00:11:27.413 "uuid": "bfc51eac-7a35-4016-8038-fd70a8208202", 00:11:27.413 "is_configured": true, 00:11:27.413 "data_offset": 2048, 00:11:27.413 "data_size": 63488 00:11:27.413 }, 00:11:27.413 { 00:11:27.413 "name": "BaseBdev4", 00:11:27.413 "uuid": 
"c9dfab88-1b57-4626-9dee-979ff5262098", 00:11:27.413 "is_configured": true, 00:11:27.413 "data_offset": 2048, 00:11:27.413 "data_size": 63488 00:11:27.413 } 00:11:27.413 ] 00:11:27.413 }' 00:11:27.413 20:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.413 20:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.671 20:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.671 20:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:27.671 20:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.671 20:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.671 20:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.929 20:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:27.929 20:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:27.929 20:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.929 20:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.929 [2024-11-26 20:24:21.238078] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:27.929 20:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.929 20:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:27.929 20:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.929 20:24:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.929 20:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:27.929 20:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:27.929 20:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.929 20:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.929 20:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.929 20:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.929 20:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.929 20:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.929 20:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.929 20:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.929 20:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.929 20:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.929 20:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.929 "name": "Existed_Raid", 00:11:27.929 "uuid": "191f172c-aa25-4691-aa55-1372fb933827", 00:11:27.929 "strip_size_kb": 64, 00:11:27.929 "state": "configuring", 00:11:27.929 "raid_level": "concat", 00:11:27.929 "superblock": true, 00:11:27.929 "num_base_bdevs": 4, 00:11:27.929 "num_base_bdevs_discovered": 2, 00:11:27.929 "num_base_bdevs_operational": 4, 00:11:27.929 "base_bdevs_list": [ 00:11:27.929 { 00:11:27.929 "name": null, 00:11:27.929 
"uuid": "3fc53af2-62b3-43ef-985f-c253dde65173", 00:11:27.929 "is_configured": false, 00:11:27.929 "data_offset": 0, 00:11:27.929 "data_size": 63488 00:11:27.929 }, 00:11:27.929 { 00:11:27.929 "name": null, 00:11:27.929 "uuid": "80414a4d-c974-484b-87de-2ea7320e76a7", 00:11:27.929 "is_configured": false, 00:11:27.929 "data_offset": 0, 00:11:27.929 "data_size": 63488 00:11:27.929 }, 00:11:27.929 { 00:11:27.929 "name": "BaseBdev3", 00:11:27.929 "uuid": "bfc51eac-7a35-4016-8038-fd70a8208202", 00:11:27.929 "is_configured": true, 00:11:27.929 "data_offset": 2048, 00:11:27.929 "data_size": 63488 00:11:27.929 }, 00:11:27.929 { 00:11:27.929 "name": "BaseBdev4", 00:11:27.929 "uuid": "c9dfab88-1b57-4626-9dee-979ff5262098", 00:11:27.929 "is_configured": true, 00:11:27.929 "data_offset": 2048, 00:11:27.929 "data_size": 63488 00:11:27.930 } 00:11:27.930 ] 00:11:27.930 }' 00:11:27.930 20:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.930 20:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.188 20:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.188 20:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:28.188 20:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.188 20:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.188 20:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.446 20:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:28.446 20:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:28.446 20:24:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.446 20:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.446 [2024-11-26 20:24:21.768546] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:28.446 20:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.446 20:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:28.446 20:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.446 20:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.446 20:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:28.446 20:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.446 20:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:28.446 20:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.446 20:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.446 20:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.446 20:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.447 20:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.447 20:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.447 20:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.447 20:24:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.447 20:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.447 20:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.447 "name": "Existed_Raid", 00:11:28.447 "uuid": "191f172c-aa25-4691-aa55-1372fb933827", 00:11:28.447 "strip_size_kb": 64, 00:11:28.447 "state": "configuring", 00:11:28.447 "raid_level": "concat", 00:11:28.447 "superblock": true, 00:11:28.447 "num_base_bdevs": 4, 00:11:28.447 "num_base_bdevs_discovered": 3, 00:11:28.447 "num_base_bdevs_operational": 4, 00:11:28.447 "base_bdevs_list": [ 00:11:28.447 { 00:11:28.447 "name": null, 00:11:28.447 "uuid": "3fc53af2-62b3-43ef-985f-c253dde65173", 00:11:28.447 "is_configured": false, 00:11:28.447 "data_offset": 0, 00:11:28.447 "data_size": 63488 00:11:28.447 }, 00:11:28.447 { 00:11:28.447 "name": "BaseBdev2", 00:11:28.447 "uuid": "80414a4d-c974-484b-87de-2ea7320e76a7", 00:11:28.447 "is_configured": true, 00:11:28.447 "data_offset": 2048, 00:11:28.447 "data_size": 63488 00:11:28.447 }, 00:11:28.447 { 00:11:28.447 "name": "BaseBdev3", 00:11:28.447 "uuid": "bfc51eac-7a35-4016-8038-fd70a8208202", 00:11:28.447 "is_configured": true, 00:11:28.447 "data_offset": 2048, 00:11:28.447 "data_size": 63488 00:11:28.447 }, 00:11:28.447 { 00:11:28.447 "name": "BaseBdev4", 00:11:28.447 "uuid": "c9dfab88-1b57-4626-9dee-979ff5262098", 00:11:28.447 "is_configured": true, 00:11:28.447 "data_offset": 2048, 00:11:28.447 "data_size": 63488 00:11:28.447 } 00:11:28.447 ] 00:11:28.447 }' 00:11:28.447 20:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.447 20:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.015 20:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.015 20:24:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.015 20:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.015 20:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:29.015 20:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.015 20:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:29.015 20:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.015 20:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:29.015 20:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.015 20:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.015 20:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.015 20:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3fc53af2-62b3-43ef-985f-c253dde65173 00:11:29.015 20:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.015 20:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.015 [2024-11-26 20:24:22.371100] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:29.015 [2024-11-26 20:24:22.371294] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:11:29.015 [2024-11-26 20:24:22.371308] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:29.015 [2024-11-26 20:24:22.371585] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:11:29.015 [2024-11-26 20:24:22.371736] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:11:29.015 [2024-11-26 20:24:22.371751] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:11:29.015 [2024-11-26 20:24:22.371858] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:29.015 NewBaseBdev 00:11:29.015 20:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.015 20:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:29.015 20:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:29.015 20:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:29.015 20:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:29.015 20:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:29.015 20:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:29.015 20:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:29.015 20:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.015 20:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.015 20:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.015 20:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:29.015 20:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.015 20:24:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.015 [ 00:11:29.015 { 00:11:29.015 "name": "NewBaseBdev", 00:11:29.015 "aliases": [ 00:11:29.015 "3fc53af2-62b3-43ef-985f-c253dde65173" 00:11:29.015 ], 00:11:29.015 "product_name": "Malloc disk", 00:11:29.015 "block_size": 512, 00:11:29.015 "num_blocks": 65536, 00:11:29.015 "uuid": "3fc53af2-62b3-43ef-985f-c253dde65173", 00:11:29.015 "assigned_rate_limits": { 00:11:29.015 "rw_ios_per_sec": 0, 00:11:29.015 "rw_mbytes_per_sec": 0, 00:11:29.015 "r_mbytes_per_sec": 0, 00:11:29.015 "w_mbytes_per_sec": 0 00:11:29.015 }, 00:11:29.015 "claimed": true, 00:11:29.015 "claim_type": "exclusive_write", 00:11:29.015 "zoned": false, 00:11:29.015 "supported_io_types": { 00:11:29.015 "read": true, 00:11:29.015 "write": true, 00:11:29.015 "unmap": true, 00:11:29.015 "flush": true, 00:11:29.015 "reset": true, 00:11:29.015 "nvme_admin": false, 00:11:29.015 "nvme_io": false, 00:11:29.015 "nvme_io_md": false, 00:11:29.015 "write_zeroes": true, 00:11:29.015 "zcopy": true, 00:11:29.015 "get_zone_info": false, 00:11:29.015 "zone_management": false, 00:11:29.015 "zone_append": false, 00:11:29.015 "compare": false, 00:11:29.015 "compare_and_write": false, 00:11:29.015 "abort": true, 00:11:29.015 "seek_hole": false, 00:11:29.015 "seek_data": false, 00:11:29.015 "copy": true, 00:11:29.015 "nvme_iov_md": false 00:11:29.015 }, 00:11:29.015 "memory_domains": [ 00:11:29.015 { 00:11:29.015 "dma_device_id": "system", 00:11:29.015 "dma_device_type": 1 00:11:29.015 }, 00:11:29.015 { 00:11:29.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.015 "dma_device_type": 2 00:11:29.015 } 00:11:29.015 ], 00:11:29.015 "driver_specific": {} 00:11:29.015 } 00:11:29.015 ] 00:11:29.015 20:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.015 20:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:29.015 20:24:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:29.015 20:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.015 20:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:29.015 20:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:29.015 20:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:29.015 20:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:29.015 20:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.015 20:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.015 20:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.015 20:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.015 20:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.015 20:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.015 20:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.015 20:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.015 20:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.015 20:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.015 "name": "Existed_Raid", 00:11:29.016 "uuid": "191f172c-aa25-4691-aa55-1372fb933827", 00:11:29.016 "strip_size_kb": 64, 00:11:29.016 
"state": "online", 00:11:29.016 "raid_level": "concat", 00:11:29.016 "superblock": true, 00:11:29.016 "num_base_bdevs": 4, 00:11:29.016 "num_base_bdevs_discovered": 4, 00:11:29.016 "num_base_bdevs_operational": 4, 00:11:29.016 "base_bdevs_list": [ 00:11:29.016 { 00:11:29.016 "name": "NewBaseBdev", 00:11:29.016 "uuid": "3fc53af2-62b3-43ef-985f-c253dde65173", 00:11:29.016 "is_configured": true, 00:11:29.016 "data_offset": 2048, 00:11:29.016 "data_size": 63488 00:11:29.016 }, 00:11:29.016 { 00:11:29.016 "name": "BaseBdev2", 00:11:29.016 "uuid": "80414a4d-c974-484b-87de-2ea7320e76a7", 00:11:29.016 "is_configured": true, 00:11:29.016 "data_offset": 2048, 00:11:29.016 "data_size": 63488 00:11:29.016 }, 00:11:29.016 { 00:11:29.016 "name": "BaseBdev3", 00:11:29.016 "uuid": "bfc51eac-7a35-4016-8038-fd70a8208202", 00:11:29.016 "is_configured": true, 00:11:29.016 "data_offset": 2048, 00:11:29.016 "data_size": 63488 00:11:29.016 }, 00:11:29.016 { 00:11:29.016 "name": "BaseBdev4", 00:11:29.016 "uuid": "c9dfab88-1b57-4626-9dee-979ff5262098", 00:11:29.016 "is_configured": true, 00:11:29.016 "data_offset": 2048, 00:11:29.016 "data_size": 63488 00:11:29.016 } 00:11:29.016 ] 00:11:29.016 }' 00:11:29.016 20:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.016 20:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.585 20:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:29.585 20:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:29.585 20:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:29.585 20:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:29.585 20:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:29.585 
20:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:29.585 20:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:29.585 20:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.585 20:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.585 20:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:29.585 [2024-11-26 20:24:22.838825] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:29.585 20:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.585 20:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:29.585 "name": "Existed_Raid", 00:11:29.585 "aliases": [ 00:11:29.585 "191f172c-aa25-4691-aa55-1372fb933827" 00:11:29.585 ], 00:11:29.585 "product_name": "Raid Volume", 00:11:29.585 "block_size": 512, 00:11:29.585 "num_blocks": 253952, 00:11:29.585 "uuid": "191f172c-aa25-4691-aa55-1372fb933827", 00:11:29.585 "assigned_rate_limits": { 00:11:29.585 "rw_ios_per_sec": 0, 00:11:29.585 "rw_mbytes_per_sec": 0, 00:11:29.585 "r_mbytes_per_sec": 0, 00:11:29.585 "w_mbytes_per_sec": 0 00:11:29.585 }, 00:11:29.585 "claimed": false, 00:11:29.585 "zoned": false, 00:11:29.585 "supported_io_types": { 00:11:29.585 "read": true, 00:11:29.585 "write": true, 00:11:29.585 "unmap": true, 00:11:29.586 "flush": true, 00:11:29.586 "reset": true, 00:11:29.586 "nvme_admin": false, 00:11:29.586 "nvme_io": false, 00:11:29.586 "nvme_io_md": false, 00:11:29.586 "write_zeroes": true, 00:11:29.586 "zcopy": false, 00:11:29.586 "get_zone_info": false, 00:11:29.586 "zone_management": false, 00:11:29.586 "zone_append": false, 00:11:29.586 "compare": false, 00:11:29.586 "compare_and_write": false, 00:11:29.586 "abort": 
false, 00:11:29.586 "seek_hole": false, 00:11:29.586 "seek_data": false, 00:11:29.586 "copy": false, 00:11:29.586 "nvme_iov_md": false 00:11:29.586 }, 00:11:29.586 "memory_domains": [ 00:11:29.586 { 00:11:29.586 "dma_device_id": "system", 00:11:29.586 "dma_device_type": 1 00:11:29.586 }, 00:11:29.586 { 00:11:29.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.586 "dma_device_type": 2 00:11:29.586 }, 00:11:29.586 { 00:11:29.586 "dma_device_id": "system", 00:11:29.586 "dma_device_type": 1 00:11:29.586 }, 00:11:29.586 { 00:11:29.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.586 "dma_device_type": 2 00:11:29.586 }, 00:11:29.586 { 00:11:29.586 "dma_device_id": "system", 00:11:29.586 "dma_device_type": 1 00:11:29.586 }, 00:11:29.586 { 00:11:29.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.586 "dma_device_type": 2 00:11:29.586 }, 00:11:29.586 { 00:11:29.586 "dma_device_id": "system", 00:11:29.586 "dma_device_type": 1 00:11:29.586 }, 00:11:29.586 { 00:11:29.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.586 "dma_device_type": 2 00:11:29.586 } 00:11:29.586 ], 00:11:29.586 "driver_specific": { 00:11:29.586 "raid": { 00:11:29.586 "uuid": "191f172c-aa25-4691-aa55-1372fb933827", 00:11:29.586 "strip_size_kb": 64, 00:11:29.586 "state": "online", 00:11:29.586 "raid_level": "concat", 00:11:29.586 "superblock": true, 00:11:29.586 "num_base_bdevs": 4, 00:11:29.586 "num_base_bdevs_discovered": 4, 00:11:29.586 "num_base_bdevs_operational": 4, 00:11:29.586 "base_bdevs_list": [ 00:11:29.586 { 00:11:29.586 "name": "NewBaseBdev", 00:11:29.586 "uuid": "3fc53af2-62b3-43ef-985f-c253dde65173", 00:11:29.586 "is_configured": true, 00:11:29.586 "data_offset": 2048, 00:11:29.586 "data_size": 63488 00:11:29.586 }, 00:11:29.586 { 00:11:29.586 "name": "BaseBdev2", 00:11:29.586 "uuid": "80414a4d-c974-484b-87de-2ea7320e76a7", 00:11:29.586 "is_configured": true, 00:11:29.586 "data_offset": 2048, 00:11:29.586 "data_size": 63488 00:11:29.586 }, 00:11:29.586 { 00:11:29.586 
"name": "BaseBdev3", 00:11:29.586 "uuid": "bfc51eac-7a35-4016-8038-fd70a8208202", 00:11:29.586 "is_configured": true, 00:11:29.586 "data_offset": 2048, 00:11:29.586 "data_size": 63488 00:11:29.586 }, 00:11:29.586 { 00:11:29.586 "name": "BaseBdev4", 00:11:29.586 "uuid": "c9dfab88-1b57-4626-9dee-979ff5262098", 00:11:29.586 "is_configured": true, 00:11:29.586 "data_offset": 2048, 00:11:29.586 "data_size": 63488 00:11:29.586 } 00:11:29.586 ] 00:11:29.586 } 00:11:29.586 } 00:11:29.586 }' 00:11:29.586 20:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:29.586 20:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:29.586 BaseBdev2 00:11:29.586 BaseBdev3 00:11:29.586 BaseBdev4' 00:11:29.586 20:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.586 20:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:29.586 20:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:29.586 20:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.586 20:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:29.586 20:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.586 20:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.586 20:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.586 20:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.586 20:24:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.586 20:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:29.586 20:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:29.586 20:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.586 20:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.586 20:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.586 20:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.586 20:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.586 20:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.586 20:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:29.586 20:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:29.586 20:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.586 20:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.586 20:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.586 20:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.586 20:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.586 20:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:11:29.586 20:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:29.586 20:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:29.586 20:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.586 20:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.586 20:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.586 20:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.846 20:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.846 20:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.846 20:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:29.846 20:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.846 20:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.846 [2024-11-26 20:24:23.141894] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:29.846 [2024-11-26 20:24:23.142019] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:29.846 [2024-11-26 20:24:23.142135] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:29.846 [2024-11-26 20:24:23.142220] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:29.846 [2024-11-26 20:24:23.142233] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, 
state offline 00:11:29.846 20:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.846 20:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83296 00:11:29.846 20:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 83296 ']' 00:11:29.846 20:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 83296 00:11:29.846 20:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:11:29.846 20:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:29.846 20:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83296 00:11:29.846 killing process with pid 83296 00:11:29.846 20:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:29.846 20:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:29.846 20:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83296' 00:11:29.846 20:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 83296 00:11:29.846 [2024-11-26 20:24:23.179602] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:29.846 20:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 83296 00:11:29.846 [2024-11-26 20:24:23.248460] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:30.105 20:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:30.105 00:11:30.105 real 0m10.265s 00:11:30.105 user 0m17.399s 00:11:30.105 sys 0m2.117s 00:11:30.105 20:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:30.105 
************************************ 00:11:30.105 END TEST raid_state_function_test_sb 00:11:30.105 ************************************ 00:11:30.105 20:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.365 20:24:23 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:11:30.365 20:24:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:30.365 20:24:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:30.365 20:24:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:30.365 ************************************ 00:11:30.365 START TEST raid_superblock_test 00:11:30.365 ************************************ 00:11:30.365 20:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 4 00:11:30.365 20:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:11:30.365 20:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:30.365 20:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:30.365 20:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:30.365 20:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:30.365 20:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:30.365 20:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:30.365 20:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:30.365 20:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:30.365 20:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:30.365 20:24:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:30.365 20:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:30.365 20:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:30.365 20:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:11:30.365 20:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:30.365 20:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:30.365 20:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=83955 00:11:30.365 20:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:30.365 20:24:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 83955 00:11:30.365 20:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 83955 ']' 00:11:30.365 20:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:30.365 20:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:30.365 20:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:30.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:30.365 20:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:30.365 20:24:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.365 [2024-11-26 20:24:23.807357] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:11:30.365 [2024-11-26 20:24:23.807652] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83955 ] 00:11:30.683 [2024-11-26 20:24:23.976531] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.683 [2024-11-26 20:24:24.063028] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.683 [2024-11-26 20:24:24.145434] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:30.683 [2024-11-26 20:24:24.145575] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:31.271 20:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:31.271 20:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:11:31.271 20:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:31.271 20:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:31.271 20:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:31.271 20:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:31.271 20:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:31.271 20:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:31.271 20:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:31.271 20:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:31.271 20:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:31.271 
20:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.271 20:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.271 malloc1 00:11:31.271 20:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.271 20:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:31.271 20:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.271 20:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.271 [2024-11-26 20:24:24.749895] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:31.271 [2024-11-26 20:24:24.750042] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:31.271 [2024-11-26 20:24:24.750075] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:31.271 [2024-11-26 20:24:24.750092] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:31.271 [2024-11-26 20:24:24.752741] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:31.271 [2024-11-26 20:24:24.752789] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:31.271 pt1 00:11:31.271 20:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.271 20:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:31.271 20:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:31.271 20:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:31.271 20:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:31.271 20:24:24 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:31.271 20:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:31.271 20:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:31.271 20:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:31.271 20:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:31.271 20:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.271 20:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.271 malloc2 00:11:31.271 20:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.271 20:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:31.271 20:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.271 20:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.271 [2024-11-26 20:24:24.789953] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:31.271 [2024-11-26 20:24:24.790052] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:31.271 [2024-11-26 20:24:24.790082] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:31.271 [2024-11-26 20:24:24.790100] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:31.271 [2024-11-26 20:24:24.793205] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:31.271 [2024-11-26 20:24:24.793327] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:31.271 
pt2 00:11:31.271 20:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.271 20:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:31.271 20:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:31.271 20:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:31.271 20:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:31.271 20:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:31.271 20:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:31.271 20:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:31.271 20:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:31.271 20:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:31.271 20:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.271 20:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.271 malloc3 00:11:31.271 20:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.271 20:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:31.271 20:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.271 20:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.530 [2024-11-26 20:24:24.825062] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:31.530 [2024-11-26 20:24:24.825155] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:31.530 [2024-11-26 20:24:24.825180] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:31.530 [2024-11-26 20:24:24.825193] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:31.530 [2024-11-26 20:24:24.827808] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:31.530 [2024-11-26 20:24:24.827859] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:31.530 pt3 00:11:31.530 20:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.530 20:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:31.530 20:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:31.530 20:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:31.530 20:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:31.530 20:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:31.530 20:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:31.530 20:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:31.530 20:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:31.530 20:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:31.530 20:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.530 20:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.530 malloc4 00:11:31.530 20:24:24 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.531 20:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:31.531 20:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.531 20:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.531 [2024-11-26 20:24:24.854917] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:31.531 [2024-11-26 20:24:24.855086] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:31.531 [2024-11-26 20:24:24.855113] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:31.531 [2024-11-26 20:24:24.855129] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:31.531 [2024-11-26 20:24:24.857737] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:31.531 [2024-11-26 20:24:24.857794] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:31.531 pt4 00:11:31.531 20:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.531 20:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:31.531 20:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:31.531 20:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:31.531 20:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.531 20:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.531 [2024-11-26 20:24:24.867018] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:31.531 [2024-11-26 
20:24:24.869259] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:31.531 [2024-11-26 20:24:24.869337] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:31.531 [2024-11-26 20:24:24.869413] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:31.531 [2024-11-26 20:24:24.869610] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:11:31.531 [2024-11-26 20:24:24.869639] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:31.531 [2024-11-26 20:24:24.869971] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:31.531 [2024-11-26 20:24:24.870154] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:11:31.531 [2024-11-26 20:24:24.870166] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:11:31.531 [2024-11-26 20:24:24.870343] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:31.531 20:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.531 20:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:31.531 20:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:31.531 20:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:31.531 20:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:31.531 20:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:31.531 20:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:31.531 20:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:31.531 20:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.531 20:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.531 20:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.531 20:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.531 20:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:31.531 20:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.531 20:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.531 20:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.531 20:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.531 "name": "raid_bdev1", 00:11:31.531 "uuid": "f9995c6f-c1d7-436f-8061-1e3c31ceb6af", 00:11:31.531 "strip_size_kb": 64, 00:11:31.531 "state": "online", 00:11:31.531 "raid_level": "concat", 00:11:31.531 "superblock": true, 00:11:31.531 "num_base_bdevs": 4, 00:11:31.531 "num_base_bdevs_discovered": 4, 00:11:31.531 "num_base_bdevs_operational": 4, 00:11:31.531 "base_bdevs_list": [ 00:11:31.531 { 00:11:31.531 "name": "pt1", 00:11:31.531 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:31.531 "is_configured": true, 00:11:31.531 "data_offset": 2048, 00:11:31.531 "data_size": 63488 00:11:31.531 }, 00:11:31.531 { 00:11:31.531 "name": "pt2", 00:11:31.531 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:31.531 "is_configured": true, 00:11:31.531 "data_offset": 2048, 00:11:31.531 "data_size": 63488 00:11:31.531 }, 00:11:31.531 { 00:11:31.531 "name": "pt3", 00:11:31.531 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:31.531 "is_configured": true, 00:11:31.531 "data_offset": 2048, 00:11:31.531 
"data_size": 63488 00:11:31.531 }, 00:11:31.531 { 00:11:31.531 "name": "pt4", 00:11:31.531 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:31.531 "is_configured": true, 00:11:31.531 "data_offset": 2048, 00:11:31.531 "data_size": 63488 00:11:31.531 } 00:11:31.531 ] 00:11:31.531 }' 00:11:31.531 20:24:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.531 20:24:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.098 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:32.098 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:32.098 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:32.098 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:32.098 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:32.098 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:32.098 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:32.098 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:32.098 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.098 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.098 [2024-11-26 20:24:25.358592] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:32.098 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.098 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:32.098 "name": "raid_bdev1", 00:11:32.098 "aliases": [ 00:11:32.098 "f9995c6f-c1d7-436f-8061-1e3c31ceb6af" 
00:11:32.098 ], 00:11:32.098 "product_name": "Raid Volume", 00:11:32.098 "block_size": 512, 00:11:32.098 "num_blocks": 253952, 00:11:32.098 "uuid": "f9995c6f-c1d7-436f-8061-1e3c31ceb6af", 00:11:32.098 "assigned_rate_limits": { 00:11:32.098 "rw_ios_per_sec": 0, 00:11:32.098 "rw_mbytes_per_sec": 0, 00:11:32.098 "r_mbytes_per_sec": 0, 00:11:32.098 "w_mbytes_per_sec": 0 00:11:32.098 }, 00:11:32.098 "claimed": false, 00:11:32.098 "zoned": false, 00:11:32.098 "supported_io_types": { 00:11:32.098 "read": true, 00:11:32.098 "write": true, 00:11:32.098 "unmap": true, 00:11:32.098 "flush": true, 00:11:32.098 "reset": true, 00:11:32.098 "nvme_admin": false, 00:11:32.098 "nvme_io": false, 00:11:32.098 "nvme_io_md": false, 00:11:32.098 "write_zeroes": true, 00:11:32.098 "zcopy": false, 00:11:32.098 "get_zone_info": false, 00:11:32.098 "zone_management": false, 00:11:32.098 "zone_append": false, 00:11:32.098 "compare": false, 00:11:32.098 "compare_and_write": false, 00:11:32.098 "abort": false, 00:11:32.098 "seek_hole": false, 00:11:32.098 "seek_data": false, 00:11:32.098 "copy": false, 00:11:32.098 "nvme_iov_md": false 00:11:32.098 }, 00:11:32.098 "memory_domains": [ 00:11:32.098 { 00:11:32.098 "dma_device_id": "system", 00:11:32.098 "dma_device_type": 1 00:11:32.098 }, 00:11:32.098 { 00:11:32.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.098 "dma_device_type": 2 00:11:32.098 }, 00:11:32.098 { 00:11:32.099 "dma_device_id": "system", 00:11:32.099 "dma_device_type": 1 00:11:32.099 }, 00:11:32.099 { 00:11:32.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.099 "dma_device_type": 2 00:11:32.099 }, 00:11:32.099 { 00:11:32.099 "dma_device_id": "system", 00:11:32.099 "dma_device_type": 1 00:11:32.099 }, 00:11:32.099 { 00:11:32.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.099 "dma_device_type": 2 00:11:32.099 }, 00:11:32.099 { 00:11:32.099 "dma_device_id": "system", 00:11:32.099 "dma_device_type": 1 00:11:32.099 }, 00:11:32.099 { 00:11:32.099 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:32.099 "dma_device_type": 2 00:11:32.099 } 00:11:32.099 ], 00:11:32.099 "driver_specific": { 00:11:32.099 "raid": { 00:11:32.099 "uuid": "f9995c6f-c1d7-436f-8061-1e3c31ceb6af", 00:11:32.099 "strip_size_kb": 64, 00:11:32.099 "state": "online", 00:11:32.099 "raid_level": "concat", 00:11:32.099 "superblock": true, 00:11:32.099 "num_base_bdevs": 4, 00:11:32.099 "num_base_bdevs_discovered": 4, 00:11:32.099 "num_base_bdevs_operational": 4, 00:11:32.099 "base_bdevs_list": [ 00:11:32.099 { 00:11:32.099 "name": "pt1", 00:11:32.099 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:32.099 "is_configured": true, 00:11:32.099 "data_offset": 2048, 00:11:32.099 "data_size": 63488 00:11:32.099 }, 00:11:32.099 { 00:11:32.099 "name": "pt2", 00:11:32.099 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:32.099 "is_configured": true, 00:11:32.099 "data_offset": 2048, 00:11:32.099 "data_size": 63488 00:11:32.099 }, 00:11:32.099 { 00:11:32.099 "name": "pt3", 00:11:32.099 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:32.099 "is_configured": true, 00:11:32.099 "data_offset": 2048, 00:11:32.099 "data_size": 63488 00:11:32.099 }, 00:11:32.099 { 00:11:32.099 "name": "pt4", 00:11:32.099 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:32.099 "is_configured": true, 00:11:32.099 "data_offset": 2048, 00:11:32.099 "data_size": 63488 00:11:32.099 } 00:11:32.099 ] 00:11:32.099 } 00:11:32.099 } 00:11:32.099 }' 00:11:32.099 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:32.099 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:32.099 pt2 00:11:32.099 pt3 00:11:32.099 pt4' 00:11:32.099 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.099 20:24:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:32.099 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:32.099 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:32.099 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.099 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.099 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.099 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.099 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:32.099 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:32.099 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:32.099 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:32.099 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.099 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.099 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.099 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.099 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:32.099 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:32.099 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:32.099 20:24:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:32.099 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.099 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.099 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.099 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.099 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:32.099 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:32.099 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:32.099 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:32.099 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.099 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.099 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.099 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.099 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:32.099 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:32.359 [2024-11-26 20:24:25.654194] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f9995c6f-c1d7-436f-8061-1e3c31ceb6af 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z f9995c6f-c1d7-436f-8061-1e3c31ceb6af ']' 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.359 [2024-11-26 20:24:25.701777] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:32.359 [2024-11-26 20:24:25.701816] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:32.359 [2024-11-26 20:24:25.701925] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:32.359 [2024-11-26 20:24:25.702025] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:32.359 [2024-11-26 20:24:25.702040] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 
-- # set +x 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.359 [2024-11-26 20:24:25.877635] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:32.359 [2024-11-26 20:24:25.880293] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:32.359 [2024-11-26 20:24:25.880461] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:32.359 [2024-11-26 20:24:25.880551] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:32.359 [2024-11-26 20:24:25.880715] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:32.359 [2024-11-26 20:24:25.880885] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:32.359 [2024-11-26 20:24:25.880999] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:32.359 [2024-11-26 20:24:25.881094] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:32.359 [2024-11-26 20:24:25.881195] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:32.359 [2024-11-26 20:24:25.881248] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state 
configuring 00:11:32.359 request: 00:11:32.359 { 00:11:32.359 "name": "raid_bdev1", 00:11:32.359 "raid_level": "concat", 00:11:32.359 "base_bdevs": [ 00:11:32.359 "malloc1", 00:11:32.359 "malloc2", 00:11:32.359 "malloc3", 00:11:32.359 "malloc4" 00:11:32.359 ], 00:11:32.359 "strip_size_kb": 64, 00:11:32.359 "superblock": false, 00:11:32.359 "method": "bdev_raid_create", 00:11:32.359 "req_id": 1 00:11:32.359 } 00:11:32.359 Got JSON-RPC error response 00:11:32.359 response: 00:11:32.359 { 00:11:32.359 "code": -17, 00:11:32.359 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:32.359 } 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.359 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.618 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:32.618 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:32.618 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:11:32.618 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.618 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.618 [2024-11-26 20:24:25.937517] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:32.618 [2024-11-26 20:24:25.937590] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:32.618 [2024-11-26 20:24:25.937623] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:32.618 [2024-11-26 20:24:25.937634] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:32.618 [2024-11-26 20:24:25.940113] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:32.618 [2024-11-26 20:24:25.940158] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:32.618 [2024-11-26 20:24:25.940253] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:32.618 [2024-11-26 20:24:25.940311] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:32.618 pt1 00:11:32.618 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.618 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:32.619 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:32.619 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:32.619 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:32.619 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:32.619 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:32.619 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.619 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.619 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.619 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.619 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:32.619 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.619 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.619 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.619 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.619 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.619 "name": "raid_bdev1", 00:11:32.619 "uuid": "f9995c6f-c1d7-436f-8061-1e3c31ceb6af", 00:11:32.619 "strip_size_kb": 64, 00:11:32.619 "state": "configuring", 00:11:32.619 "raid_level": "concat", 00:11:32.619 "superblock": true, 00:11:32.619 "num_base_bdevs": 4, 00:11:32.619 "num_base_bdevs_discovered": 1, 00:11:32.619 "num_base_bdevs_operational": 4, 00:11:32.619 "base_bdevs_list": [ 00:11:32.619 { 00:11:32.619 "name": "pt1", 00:11:32.619 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:32.619 "is_configured": true, 00:11:32.619 "data_offset": 2048, 00:11:32.619 "data_size": 63488 00:11:32.619 }, 00:11:32.619 { 00:11:32.619 "name": null, 00:11:32.619 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:32.619 "is_configured": false, 00:11:32.619 "data_offset": 2048, 00:11:32.619 "data_size": 63488 00:11:32.619 }, 00:11:32.619 { 00:11:32.619 "name": null, 00:11:32.619 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:32.619 "is_configured": false, 00:11:32.619 "data_offset": 2048, 00:11:32.619 "data_size": 63488 00:11:32.619 }, 00:11:32.619 { 00:11:32.619 "name": null, 00:11:32.619 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:32.619 "is_configured": false, 00:11:32.619 "data_offset": 2048, 00:11:32.619 "data_size": 63488 00:11:32.619 } 00:11:32.619 ] 00:11:32.619 }' 00:11:32.619 20:24:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.619 20:24:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.877 20:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:32.877 20:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:32.877 20:24:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.877 20:24:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.877 [2024-11-26 20:24:26.404791] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:32.877 [2024-11-26 20:24:26.404880] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:32.877 [2024-11-26 20:24:26.404905] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:32.878 [2024-11-26 20:24:26.404916] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:32.878 [2024-11-26 20:24:26.405365] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:32.878 [2024-11-26 20:24:26.405384] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:32.878 [2024-11-26 20:24:26.405467] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:32.878 [2024-11-26 20:24:26.405490] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:32.878 pt2 00:11:32.878 20:24:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.878 20:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:32.878 20:24:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.878 20:24:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.878 [2024-11-26 20:24:26.416785] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:32.878 20:24:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.878 20:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:32.878 20:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:32.878 20:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:32.878 20:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:32.878 20:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:32.878 20:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:32.878 20:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.878 20:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.878 20:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.878 20:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.136 20:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.136 20:24:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:33.136 20:24:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.136 20:24:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.136 20:24:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.136 20:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.136 "name": "raid_bdev1", 00:11:33.136 "uuid": "f9995c6f-c1d7-436f-8061-1e3c31ceb6af", 00:11:33.136 "strip_size_kb": 64, 00:11:33.136 "state": "configuring", 00:11:33.136 "raid_level": "concat", 00:11:33.136 "superblock": true, 00:11:33.136 "num_base_bdevs": 4, 00:11:33.136 "num_base_bdevs_discovered": 1, 00:11:33.136 "num_base_bdevs_operational": 4, 00:11:33.136 "base_bdevs_list": [ 00:11:33.136 { 00:11:33.136 "name": "pt1", 00:11:33.136 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:33.136 "is_configured": true, 00:11:33.136 "data_offset": 2048, 00:11:33.136 "data_size": 63488 00:11:33.136 }, 00:11:33.136 { 00:11:33.136 "name": null, 00:11:33.136 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:33.136 "is_configured": false, 00:11:33.136 "data_offset": 0, 00:11:33.136 "data_size": 63488 00:11:33.136 }, 00:11:33.136 { 00:11:33.136 "name": null, 00:11:33.136 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:33.136 "is_configured": false, 00:11:33.136 "data_offset": 2048, 00:11:33.136 "data_size": 63488 00:11:33.136 }, 00:11:33.136 { 00:11:33.136 "name": null, 00:11:33.136 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:33.136 "is_configured": false, 00:11:33.136 "data_offset": 2048, 00:11:33.136 "data_size": 63488 00:11:33.136 } 00:11:33.136 ] 00:11:33.136 }' 00:11:33.136 20:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.136 20:24:26 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:33.394 20:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:33.394 20:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:33.394 20:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:33.395 20:24:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.395 20:24:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.395 [2024-11-26 20:24:26.880142] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:33.395 [2024-11-26 20:24:26.880301] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.395 [2024-11-26 20:24:26.880352] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:33.395 [2024-11-26 20:24:26.880391] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.395 [2024-11-26 20:24:26.880915] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.395 [2024-11-26 20:24:26.880981] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:33.395 [2024-11-26 20:24:26.881099] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:33.395 [2024-11-26 20:24:26.881157] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:33.395 pt2 00:11:33.395 20:24:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.395 20:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:33.395 20:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:33.395 20:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:33.395 20:24:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.395 20:24:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.395 [2024-11-26 20:24:26.892068] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:33.395 [2024-11-26 20:24:26.892192] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.395 [2024-11-26 20:24:26.892244] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:33.395 [2024-11-26 20:24:26.892281] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.395 [2024-11-26 20:24:26.892755] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.395 [2024-11-26 20:24:26.892831] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:33.395 [2024-11-26 20:24:26.892940] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:33.395 [2024-11-26 20:24:26.892998] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:33.395 pt3 00:11:33.395 20:24:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.395 20:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:33.395 20:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:33.395 20:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:33.395 20:24:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.395 20:24:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.395 [2024-11-26 20:24:26.904046] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:33.395 [2024-11-26 20:24:26.904121] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.395 [2024-11-26 20:24:26.904140] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:33.395 [2024-11-26 20:24:26.904150] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.395 [2024-11-26 20:24:26.904514] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.395 [2024-11-26 20:24:26.904534] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:33.395 [2024-11-26 20:24:26.904610] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:33.395 [2024-11-26 20:24:26.904652] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:33.395 [2024-11-26 20:24:26.904780] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:11:33.395 [2024-11-26 20:24:26.904796] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:33.395 [2024-11-26 20:24:26.905048] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:33.395 [2024-11-26 20:24:26.905178] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:11:33.395 [2024-11-26 20:24:26.905189] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:11:33.395 [2024-11-26 20:24:26.905295] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:33.395 pt4 00:11:33.395 20:24:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.395 20:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:33.395 20:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:11:33.395 20:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:33.395 20:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:33.395 20:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:33.395 20:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:33.395 20:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:33.395 20:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.395 20:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.395 20:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.395 20:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.395 20:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.395 20:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.395 20:24:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.395 20:24:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.395 20:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:33.395 20:24:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.653 20:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.653 "name": "raid_bdev1", 00:11:33.653 "uuid": "f9995c6f-c1d7-436f-8061-1e3c31ceb6af", 00:11:33.653 "strip_size_kb": 64, 00:11:33.653 "state": "online", 00:11:33.653 "raid_level": "concat", 00:11:33.653 
"superblock": true, 00:11:33.653 "num_base_bdevs": 4, 00:11:33.653 "num_base_bdevs_discovered": 4, 00:11:33.653 "num_base_bdevs_operational": 4, 00:11:33.653 "base_bdevs_list": [ 00:11:33.653 { 00:11:33.653 "name": "pt1", 00:11:33.653 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:33.653 "is_configured": true, 00:11:33.653 "data_offset": 2048, 00:11:33.653 "data_size": 63488 00:11:33.653 }, 00:11:33.653 { 00:11:33.653 "name": "pt2", 00:11:33.653 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:33.653 "is_configured": true, 00:11:33.653 "data_offset": 2048, 00:11:33.653 "data_size": 63488 00:11:33.653 }, 00:11:33.653 { 00:11:33.653 "name": "pt3", 00:11:33.653 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:33.653 "is_configured": true, 00:11:33.653 "data_offset": 2048, 00:11:33.653 "data_size": 63488 00:11:33.653 }, 00:11:33.653 { 00:11:33.653 "name": "pt4", 00:11:33.653 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:33.653 "is_configured": true, 00:11:33.653 "data_offset": 2048, 00:11:33.653 "data_size": 63488 00:11:33.653 } 00:11:33.653 ] 00:11:33.653 }' 00:11:33.653 20:24:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.653 20:24:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.913 20:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:33.913 20:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:33.913 20:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:33.913 20:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:33.913 20:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:33.913 20:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:33.913 20:24:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:33.913 20:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:33.913 20:24:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.913 20:24:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.913 [2024-11-26 20:24:27.375673] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:33.913 20:24:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.913 20:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:33.913 "name": "raid_bdev1", 00:11:33.913 "aliases": [ 00:11:33.913 "f9995c6f-c1d7-436f-8061-1e3c31ceb6af" 00:11:33.913 ], 00:11:33.913 "product_name": "Raid Volume", 00:11:33.913 "block_size": 512, 00:11:33.913 "num_blocks": 253952, 00:11:33.913 "uuid": "f9995c6f-c1d7-436f-8061-1e3c31ceb6af", 00:11:33.913 "assigned_rate_limits": { 00:11:33.913 "rw_ios_per_sec": 0, 00:11:33.913 "rw_mbytes_per_sec": 0, 00:11:33.913 "r_mbytes_per_sec": 0, 00:11:33.913 "w_mbytes_per_sec": 0 00:11:33.913 }, 00:11:33.913 "claimed": false, 00:11:33.913 "zoned": false, 00:11:33.913 "supported_io_types": { 00:11:33.913 "read": true, 00:11:33.913 "write": true, 00:11:33.913 "unmap": true, 00:11:33.913 "flush": true, 00:11:33.913 "reset": true, 00:11:33.913 "nvme_admin": false, 00:11:33.913 "nvme_io": false, 00:11:33.913 "nvme_io_md": false, 00:11:33.913 "write_zeroes": true, 00:11:33.913 "zcopy": false, 00:11:33.913 "get_zone_info": false, 00:11:33.913 "zone_management": false, 00:11:33.913 "zone_append": false, 00:11:33.913 "compare": false, 00:11:33.913 "compare_and_write": false, 00:11:33.913 "abort": false, 00:11:33.913 "seek_hole": false, 00:11:33.913 "seek_data": false, 00:11:33.913 "copy": false, 00:11:33.913 "nvme_iov_md": false 00:11:33.913 }, 00:11:33.913 
"memory_domains": [ 00:11:33.913 { 00:11:33.913 "dma_device_id": "system", 00:11:33.913 "dma_device_type": 1 00:11:33.913 }, 00:11:33.913 { 00:11:33.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.913 "dma_device_type": 2 00:11:33.913 }, 00:11:33.913 { 00:11:33.913 "dma_device_id": "system", 00:11:33.913 "dma_device_type": 1 00:11:33.913 }, 00:11:33.913 { 00:11:33.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.913 "dma_device_type": 2 00:11:33.913 }, 00:11:33.913 { 00:11:33.913 "dma_device_id": "system", 00:11:33.913 "dma_device_type": 1 00:11:33.913 }, 00:11:33.913 { 00:11:33.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.913 "dma_device_type": 2 00:11:33.913 }, 00:11:33.913 { 00:11:33.913 "dma_device_id": "system", 00:11:33.913 "dma_device_type": 1 00:11:33.913 }, 00:11:33.913 { 00:11:33.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.913 "dma_device_type": 2 00:11:33.913 } 00:11:33.913 ], 00:11:33.913 "driver_specific": { 00:11:33.913 "raid": { 00:11:33.913 "uuid": "f9995c6f-c1d7-436f-8061-1e3c31ceb6af", 00:11:33.913 "strip_size_kb": 64, 00:11:33.913 "state": "online", 00:11:33.913 "raid_level": "concat", 00:11:33.913 "superblock": true, 00:11:33.913 "num_base_bdevs": 4, 00:11:33.913 "num_base_bdevs_discovered": 4, 00:11:33.913 "num_base_bdevs_operational": 4, 00:11:33.913 "base_bdevs_list": [ 00:11:33.913 { 00:11:33.913 "name": "pt1", 00:11:33.913 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:33.913 "is_configured": true, 00:11:33.913 "data_offset": 2048, 00:11:33.913 "data_size": 63488 00:11:33.913 }, 00:11:33.913 { 00:11:33.913 "name": "pt2", 00:11:33.913 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:33.913 "is_configured": true, 00:11:33.913 "data_offset": 2048, 00:11:33.913 "data_size": 63488 00:11:33.913 }, 00:11:33.913 { 00:11:33.913 "name": "pt3", 00:11:33.913 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:33.913 "is_configured": true, 00:11:33.913 "data_offset": 2048, 00:11:33.913 "data_size": 63488 
00:11:33.913 }, 00:11:33.913 { 00:11:33.913 "name": "pt4", 00:11:33.913 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:33.913 "is_configured": true, 00:11:33.913 "data_offset": 2048, 00:11:33.913 "data_size": 63488 00:11:33.913 } 00:11:33.913 ] 00:11:33.913 } 00:11:33.913 } 00:11:33.913 }' 00:11:33.913 20:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:34.170 20:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:34.170 pt2 00:11:34.170 pt3 00:11:34.170 pt4' 00:11:34.170 20:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.170 20:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:34.171 20:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:34.171 20:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.171 20:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:34.171 20:24:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.171 20:24:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.171 20:24:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.171 20:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:34.171 20:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:34.171 20:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:34.171 20:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:11:34.171 20:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.171 20:24:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.171 20:24:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.171 20:24:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.171 20:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:34.171 20:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:34.171 20:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:34.171 20:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:34.171 20:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.171 20:24:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.171 20:24:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.171 20:24:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.171 20:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:34.171 20:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:34.171 20:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:34.171 20:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:34.171 20:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:11:34.171 20:24:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.171 20:24:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.171 20:24:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.171 20:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:34.171 20:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:34.171 20:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:34.171 20:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:34.171 20:24:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.171 20:24:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.171 [2024-11-26 20:24:27.719160] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:34.430 20:24:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.430 20:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f9995c6f-c1d7-436f-8061-1e3c31ceb6af '!=' f9995c6f-c1d7-436f-8061-1e3c31ceb6af ']' 00:11:34.430 20:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:34.430 20:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:34.430 20:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:34.430 20:24:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 83955 00:11:34.430 20:24:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 83955 ']' 00:11:34.430 20:24:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 83955 00:11:34.430 20:24:27 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@955 -- # uname 00:11:34.430 20:24:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:34.430 20:24:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83955 00:11:34.430 20:24:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:34.430 20:24:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:34.430 20:24:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83955' 00:11:34.430 killing process with pid 83955 00:11:34.430 20:24:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 83955 00:11:34.430 [2024-11-26 20:24:27.797857] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:34.430 20:24:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 83955 00:11:34.430 [2024-11-26 20:24:27.798028] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:34.430 [2024-11-26 20:24:27.798156] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:34.430 [2024-11-26 20:24:27.798213] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:11:34.430 [2024-11-26 20:24:27.866967] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:34.689 20:24:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:34.689 00:11:34.689 real 0m4.522s 00:11:34.689 user 0m6.995s 00:11:34.689 sys 0m1.020s 00:11:34.689 20:24:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:34.689 20:24:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.689 ************************************ 00:11:34.689 END TEST raid_superblock_test 
00:11:34.689 ************************************ 00:11:34.948 20:24:28 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:11:34.948 20:24:28 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:34.948 20:24:28 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:34.948 20:24:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:34.948 ************************************ 00:11:34.948 START TEST raid_read_error_test 00:11:34.948 ************************************ 00:11:34.948 20:24:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 read 00:11:34.948 20:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:34.948 20:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:34.948 20:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:34.948 20:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:34.948 20:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:34.948 20:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:34.948 20:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:34.948 20:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:34.948 20:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:34.948 20:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:34.948 20:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:34.948 20:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:34.948 20:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:11:34.948 20:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:34.948 20:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:34.948 20:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:34.948 20:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:34.948 20:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:34.948 20:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:34.948 20:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:34.948 20:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:34.948 20:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:34.948 20:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:34.948 20:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:34.948 20:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:34.948 20:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:34.948 20:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:34.948 20:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:34.948 20:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.KbQRO87h69 00:11:34.948 20:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=84209 00:11:34.948 20:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z 
-f -L bdev_raid 00:11:34.948 20:24:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 84209 00:11:34.948 20:24:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 84209 ']' 00:11:34.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.948 20:24:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.948 20:24:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:34.948 20:24:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.948 20:24:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:34.948 20:24:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.948 [2024-11-26 20:24:28.436384] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:11:34.948 [2024-11-26 20:24:28.436950] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84209 ] 00:11:35.207 [2024-11-26 20:24:28.608520] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.207 [2024-11-26 20:24:28.693657] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.465 [2024-11-26 20:24:28.774642] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:35.465 [2024-11-26 20:24:28.774677] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.033 BaseBdev1_malloc 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.033 true 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.033 [2024-11-26 20:24:29.336731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:36.033 [2024-11-26 20:24:29.336849] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.033 [2024-11-26 20:24:29.336881] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:36.033 [2024-11-26 20:24:29.336901] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.033 [2024-11-26 20:24:29.339614] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.033 [2024-11-26 20:24:29.339668] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:36.033 BaseBdev1 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.033 BaseBdev2_malloc 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.033 true 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.033 [2024-11-26 20:24:29.382138] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:36.033 [2024-11-26 20:24:29.382214] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.033 [2024-11-26 20:24:29.382243] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:36.033 [2024-11-26 20:24:29.382254] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.033 [2024-11-26 20:24:29.384795] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.033 [2024-11-26 20:24:29.384841] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:36.033 BaseBdev2 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.033 BaseBdev3_malloc 00:11:36.033 20:24:29 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.033 true 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.033 [2024-11-26 20:24:29.420434] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:36.033 [2024-11-26 20:24:29.420552] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.033 [2024-11-26 20:24:29.420599] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:36.033 [2024-11-26 20:24:29.420698] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.033 [2024-11-26 20:24:29.423169] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.033 [2024-11-26 20:24:29.423258] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:36.033 BaseBdev3 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.033 BaseBdev4_malloc 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.033 true 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.033 [2024-11-26 20:24:29.450071] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:36.033 [2024-11-26 20:24:29.450178] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.033 [2024-11-26 20:24:29.450210] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:36.033 [2024-11-26 20:24:29.450221] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.033 [2024-11-26 20:24:29.452691] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.033 [2024-11-26 20:24:29.452731] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:36.033 BaseBdev4 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.033 [2024-11-26 20:24:29.462126] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:36.033 [2024-11-26 20:24:29.464306] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:36.033 [2024-11-26 20:24:29.464407] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:36.033 [2024-11-26 20:24:29.464470] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:36.033 [2024-11-26 20:24:29.464769] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:11:36.033 [2024-11-26 20:24:29.464785] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:36.033 [2024-11-26 20:24:29.465088] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:36.033 [2024-11-26 20:24:29.465255] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:11:36.033 [2024-11-26 20:24:29.465271] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:11:36.033 [2024-11-26 20:24:29.465441] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:36.033 20:24:29 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.033 20:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.034 20:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.034 20:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.034 20:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.034 20:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.034 "name": "raid_bdev1", 00:11:36.034 "uuid": "9d9daef5-7428-40c4-9ee6-19042aae58c7", 00:11:36.034 "strip_size_kb": 64, 00:11:36.034 "state": "online", 00:11:36.034 "raid_level": "concat", 00:11:36.034 "superblock": true, 00:11:36.034 "num_base_bdevs": 4, 00:11:36.034 "num_base_bdevs_discovered": 4, 00:11:36.034 "num_base_bdevs_operational": 4, 00:11:36.034 "base_bdevs_list": [ 
00:11:36.034 { 00:11:36.034 "name": "BaseBdev1", 00:11:36.034 "uuid": "625aff68-1a74-5bb4-b31d-68fd586bd8a0", 00:11:36.034 "is_configured": true, 00:11:36.034 "data_offset": 2048, 00:11:36.034 "data_size": 63488 00:11:36.034 }, 00:11:36.034 { 00:11:36.034 "name": "BaseBdev2", 00:11:36.034 "uuid": "d60b8bba-36d2-5b02-87fe-e841bad0d394", 00:11:36.034 "is_configured": true, 00:11:36.034 "data_offset": 2048, 00:11:36.034 "data_size": 63488 00:11:36.034 }, 00:11:36.034 { 00:11:36.034 "name": "BaseBdev3", 00:11:36.034 "uuid": "e998095a-14be-5f32-9fa3-5823394dba53", 00:11:36.034 "is_configured": true, 00:11:36.034 "data_offset": 2048, 00:11:36.034 "data_size": 63488 00:11:36.034 }, 00:11:36.034 { 00:11:36.034 "name": "BaseBdev4", 00:11:36.034 "uuid": "d529bf98-0a14-53eb-8eec-b407e1d53335", 00:11:36.034 "is_configured": true, 00:11:36.034 "data_offset": 2048, 00:11:36.034 "data_size": 63488 00:11:36.034 } 00:11:36.034 ] 00:11:36.034 }' 00:11:36.034 20:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.034 20:24:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.399 20:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:36.399 20:24:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:36.670 [2024-11-26 20:24:29.993650] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:37.604 20:24:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:37.604 20:24:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.604 20:24:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.604 20:24:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.604 20:24:30 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:37.604 20:24:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:37.604 20:24:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:37.604 20:24:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:37.604 20:24:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:37.604 20:24:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:37.604 20:24:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:37.604 20:24:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:37.604 20:24:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:37.604 20:24:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.604 20:24:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.604 20:24:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.604 20:24:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.604 20:24:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.604 20:24:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.604 20:24:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.604 20:24:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:37.604 20:24:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.604 20:24:30 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.604 "name": "raid_bdev1", 00:11:37.604 "uuid": "9d9daef5-7428-40c4-9ee6-19042aae58c7", 00:11:37.604 "strip_size_kb": 64, 00:11:37.604 "state": "online", 00:11:37.604 "raid_level": "concat", 00:11:37.604 "superblock": true, 00:11:37.604 "num_base_bdevs": 4, 00:11:37.604 "num_base_bdevs_discovered": 4, 00:11:37.604 "num_base_bdevs_operational": 4, 00:11:37.604 "base_bdevs_list": [ 00:11:37.604 { 00:11:37.604 "name": "BaseBdev1", 00:11:37.604 "uuid": "625aff68-1a74-5bb4-b31d-68fd586bd8a0", 00:11:37.604 "is_configured": true, 00:11:37.604 "data_offset": 2048, 00:11:37.604 "data_size": 63488 00:11:37.604 }, 00:11:37.604 { 00:11:37.604 "name": "BaseBdev2", 00:11:37.604 "uuid": "d60b8bba-36d2-5b02-87fe-e841bad0d394", 00:11:37.604 "is_configured": true, 00:11:37.604 "data_offset": 2048, 00:11:37.604 "data_size": 63488 00:11:37.604 }, 00:11:37.604 { 00:11:37.604 "name": "BaseBdev3", 00:11:37.604 "uuid": "e998095a-14be-5f32-9fa3-5823394dba53", 00:11:37.604 "is_configured": true, 00:11:37.604 "data_offset": 2048, 00:11:37.604 "data_size": 63488 00:11:37.604 }, 00:11:37.604 { 00:11:37.604 "name": "BaseBdev4", 00:11:37.604 "uuid": "d529bf98-0a14-53eb-8eec-b407e1d53335", 00:11:37.604 "is_configured": true, 00:11:37.604 "data_offset": 2048, 00:11:37.604 "data_size": 63488 00:11:37.604 } 00:11:37.604 ] 00:11:37.604 }' 00:11:37.604 20:24:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.604 20:24:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.169 20:24:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:38.169 20:24:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.169 20:24:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.169 [2024-11-26 20:24:31.432235] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:38.169 [2024-11-26 20:24:31.432333] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:38.169 [2024-11-26 20:24:31.435313] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:38.169 [2024-11-26 20:24:31.435419] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:38.169 [2024-11-26 20:24:31.435477] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:38.169 [2024-11-26 20:24:31.435488] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:11:38.169 { 00:11:38.169 "results": [ 00:11:38.169 { 00:11:38.169 "job": "raid_bdev1", 00:11:38.169 "core_mask": "0x1", 00:11:38.169 "workload": "randrw", 00:11:38.169 "percentage": 50, 00:11:38.169 "status": "finished", 00:11:38.169 "queue_depth": 1, 00:11:38.169 "io_size": 131072, 00:11:38.169 "runtime": 1.439337, 00:11:38.169 "iops": 12835.076149643899, 00:11:38.169 "mibps": 1604.3845187054874, 00:11:38.169 "io_failed": 1, 00:11:38.169 "io_timeout": 0, 00:11:38.169 "avg_latency_us": 108.58583895385598, 00:11:38.169 "min_latency_us": 27.612227074235808, 00:11:38.169 "max_latency_us": 1731.4096069868995 00:11:38.169 } 00:11:38.169 ], 00:11:38.169 "core_count": 1 00:11:38.169 } 00:11:38.169 20:24:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.169 20:24:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 84209 00:11:38.169 20:24:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 84209 ']' 00:11:38.169 20:24:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 84209 00:11:38.169 20:24:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:11:38.169 20:24:31 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:38.169 20:24:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84209 00:11:38.169 killing process with pid 84209 00:11:38.169 20:24:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:38.169 20:24:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:38.169 20:24:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84209' 00:11:38.169 20:24:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 84209 00:11:38.169 [2024-11-26 20:24:31.462575] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:38.169 20:24:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 84209 00:11:38.169 [2024-11-26 20:24:31.521425] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:38.426 20:24:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.KbQRO87h69 00:11:38.426 20:24:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:38.426 20:24:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:38.426 20:24:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69 00:11:38.426 20:24:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:38.426 20:24:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:38.426 20:24:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:38.426 ************************************ 00:11:38.426 END TEST raid_read_error_test 00:11:38.426 ************************************ 00:11:38.426 20:24:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.69 != \0\.\0\0 ]] 
00:11:38.426 00:11:38.426 real 0m3.590s 00:11:38.426 user 0m4.424s 00:11:38.426 sys 0m0.663s 00:11:38.426 20:24:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:38.426 20:24:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.426 20:24:31 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:11:38.426 20:24:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:38.426 20:24:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:38.426 20:24:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:38.426 ************************************ 00:11:38.426 START TEST raid_write_error_test 00:11:38.426 ************************************ 00:11:38.426 20:24:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 write 00:11:38.426 20:24:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:38.426 20:24:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:38.426 20:24:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:38.427 20:24:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:38.427 20:24:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:38.427 20:24:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:38.427 20:24:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:38.427 20:24:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:38.427 20:24:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:38.427 20:24:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:38.427 20:24:31 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:38.427 20:24:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:38.427 20:24:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:38.427 20:24:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:38.427 20:24:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:38.427 20:24:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:38.427 20:24:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:38.427 20:24:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:38.427 20:24:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:38.427 20:24:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:38.427 20:24:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:38.427 20:24:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:38.427 20:24:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:38.427 20:24:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:38.427 20:24:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:38.427 20:24:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:38.427 20:24:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:38.427 20:24:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:38.427 20:24:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # 
bdevperf_log=/raidtest/tmp.C76FiAW7s2 00:11:38.738 20:24:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=84338 00:11:38.738 20:24:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:38.738 20:24:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 84338 00:11:38.738 20:24:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 84338 ']' 00:11:38.738 20:24:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.738 20:24:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:38.738 20:24:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:38.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:38.738 20:24:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:38.738 20:24:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.738 [2024-11-26 20:24:32.054978] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:11:38.738 [2024-11-26 20:24:32.055128] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84338 ] 00:11:38.738 [2024-11-26 20:24:32.206897] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.995 [2024-11-26 20:24:32.295471] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.995 [2024-11-26 20:24:32.373804] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:38.995 [2024-11-26 20:24:32.373970] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:39.561 20:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:39.561 20:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:39.561 20:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:39.561 20:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:39.561 20:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.561 20:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.561 BaseBdev1_malloc 00:11:39.561 20:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.561 20:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:39.561 20:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.561 20:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.561 true 00:11:39.561 20:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:39.561 20:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:39.561 20:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.561 20:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.561 [2024-11-26 20:24:33.065014] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:39.561 [2024-11-26 20:24:33.065106] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.561 [2024-11-26 20:24:33.065161] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:39.561 [2024-11-26 20:24:33.065174] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.561 [2024-11-26 20:24:33.067915] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.561 [2024-11-26 20:24:33.067988] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:39.561 BaseBdev1 00:11:39.561 20:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.561 20:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:39.561 20:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:39.561 20:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.561 20:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.561 BaseBdev2_malloc 00:11:39.561 20:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.561 20:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:39.561 20:24:33 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.561 20:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.820 true 00:11:39.820 20:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.820 20:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:39.820 20:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.820 20:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.820 [2024-11-26 20:24:33.119793] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:39.820 [2024-11-26 20:24:33.119891] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.820 [2024-11-26 20:24:33.119945] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:39.820 [2024-11-26 20:24:33.119958] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.820 [2024-11-26 20:24:33.122687] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.820 [2024-11-26 20:24:33.122748] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:39.820 BaseBdev2 00:11:39.820 20:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.820 20:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:39.820 20:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:39.820 20:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.820 20:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:39.820 BaseBdev3_malloc 00:11:39.820 20:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.820 20:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:39.820 20:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.820 20:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.820 true 00:11:39.820 20:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.820 20:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:39.820 20:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.820 20:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.820 [2024-11-26 20:24:33.164421] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:39.820 [2024-11-26 20:24:33.164529] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.820 [2024-11-26 20:24:33.164567] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:39.820 [2024-11-26 20:24:33.164581] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.820 [2024-11-26 20:24:33.167333] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.820 [2024-11-26 20:24:33.167403] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:39.820 BaseBdev3 00:11:39.820 20:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.820 20:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:39.820 20:24:33 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:39.820 20:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.820 20:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.820 BaseBdev4_malloc 00:11:39.820 20:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.820 20:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:39.820 20:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.820 20:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.820 true 00:11:39.820 20:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.820 20:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:39.820 20:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.820 20:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.820 [2024-11-26 20:24:33.212651] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:39.820 [2024-11-26 20:24:33.212771] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.820 [2024-11-26 20:24:33.212825] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:39.820 [2024-11-26 20:24:33.212846] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.820 [2024-11-26 20:24:33.216195] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.820 [2024-11-26 20:24:33.216290] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:39.820 BaseBdev4 
00:11:39.820 20:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.820 20:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:39.820 20:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.820 20:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.820 [2024-11-26 20:24:33.224756] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:39.820 [2024-11-26 20:24:33.227818] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:39.820 [2024-11-26 20:24:33.228017] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:39.820 [2024-11-26 20:24:33.228135] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:39.820 [2024-11-26 20:24:33.228532] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:11:39.820 [2024-11-26 20:24:33.228567] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:39.820 [2024-11-26 20:24:33.229092] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:39.820 [2024-11-26 20:24:33.229399] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:11:39.820 [2024-11-26 20:24:33.229428] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:11:39.820 [2024-11-26 20:24:33.229836] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:39.820 20:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.820 20:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:11:39.820 20:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:39.821 20:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:39.821 20:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:39.821 20:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:39.821 20:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:39.821 20:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.821 20:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.821 20:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.821 20:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.821 20:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.821 20:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:39.821 20:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.821 20:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.821 20:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.821 20:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.821 "name": "raid_bdev1", 00:11:39.821 "uuid": "a5b02bf8-c146-4596-8da4-f09c6e431058", 00:11:39.821 "strip_size_kb": 64, 00:11:39.821 "state": "online", 00:11:39.821 "raid_level": "concat", 00:11:39.821 "superblock": true, 00:11:39.821 "num_base_bdevs": 4, 00:11:39.821 "num_base_bdevs_discovered": 4, 00:11:39.821 
"num_base_bdevs_operational": 4, 00:11:39.821 "base_bdevs_list": [ 00:11:39.821 { 00:11:39.821 "name": "BaseBdev1", 00:11:39.821 "uuid": "d15d23db-fda7-53d1-9b00-352d410a079d", 00:11:39.821 "is_configured": true, 00:11:39.821 "data_offset": 2048, 00:11:39.821 "data_size": 63488 00:11:39.821 }, 00:11:39.821 { 00:11:39.821 "name": "BaseBdev2", 00:11:39.821 "uuid": "56d49794-388a-54bb-8dae-5e6c4cad7290", 00:11:39.821 "is_configured": true, 00:11:39.821 "data_offset": 2048, 00:11:39.821 "data_size": 63488 00:11:39.821 }, 00:11:39.821 { 00:11:39.821 "name": "BaseBdev3", 00:11:39.821 "uuid": "c1f50b22-11bb-541a-8f02-c9fea59c9fa5", 00:11:39.821 "is_configured": true, 00:11:39.821 "data_offset": 2048, 00:11:39.821 "data_size": 63488 00:11:39.821 }, 00:11:39.821 { 00:11:39.821 "name": "BaseBdev4", 00:11:39.821 "uuid": "d2f44f8d-5b62-58d1-a8d2-efdac6f3e3b6", 00:11:39.821 "is_configured": true, 00:11:39.821 "data_offset": 2048, 00:11:39.821 "data_size": 63488 00:11:39.821 } 00:11:39.821 ] 00:11:39.821 }' 00:11:39.821 20:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.821 20:24:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.386 20:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:40.386 20:24:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:40.386 [2024-11-26 20:24:33.780345] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:41.322 20:24:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:41.322 20:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.322 20:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.322 20:24:34 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.322 20:24:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:41.322 20:24:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:41.322 20:24:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:41.322 20:24:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:41.322 20:24:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:41.322 20:24:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:41.322 20:24:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:41.322 20:24:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:41.322 20:24:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.322 20:24:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.322 20:24:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.322 20:24:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.322 20:24:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.322 20:24:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.322 20:24:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:41.322 20:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.322 20:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.322 20:24:34 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.322 20:24:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.322 "name": "raid_bdev1", 00:11:41.322 "uuid": "a5b02bf8-c146-4596-8da4-f09c6e431058", 00:11:41.322 "strip_size_kb": 64, 00:11:41.322 "state": "online", 00:11:41.322 "raid_level": "concat", 00:11:41.322 "superblock": true, 00:11:41.322 "num_base_bdevs": 4, 00:11:41.322 "num_base_bdevs_discovered": 4, 00:11:41.322 "num_base_bdevs_operational": 4, 00:11:41.322 "base_bdevs_list": [ 00:11:41.322 { 00:11:41.322 "name": "BaseBdev1", 00:11:41.322 "uuid": "d15d23db-fda7-53d1-9b00-352d410a079d", 00:11:41.322 "is_configured": true, 00:11:41.322 "data_offset": 2048, 00:11:41.322 "data_size": 63488 00:11:41.322 }, 00:11:41.322 { 00:11:41.322 "name": "BaseBdev2", 00:11:41.322 "uuid": "56d49794-388a-54bb-8dae-5e6c4cad7290", 00:11:41.322 "is_configured": true, 00:11:41.323 "data_offset": 2048, 00:11:41.323 "data_size": 63488 00:11:41.323 }, 00:11:41.323 { 00:11:41.323 "name": "BaseBdev3", 00:11:41.323 "uuid": "c1f50b22-11bb-541a-8f02-c9fea59c9fa5", 00:11:41.323 "is_configured": true, 00:11:41.323 "data_offset": 2048, 00:11:41.323 "data_size": 63488 00:11:41.323 }, 00:11:41.323 { 00:11:41.323 "name": "BaseBdev4", 00:11:41.323 "uuid": "d2f44f8d-5b62-58d1-a8d2-efdac6f3e3b6", 00:11:41.323 "is_configured": true, 00:11:41.323 "data_offset": 2048, 00:11:41.323 "data_size": 63488 00:11:41.323 } 00:11:41.323 ] 00:11:41.323 }' 00:11:41.323 20:24:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.323 20:24:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.582 20:24:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:41.582 20:24:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.582 20:24:35 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:41.582 [2024-11-26 20:24:35.119821] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:41.582 [2024-11-26 20:24:35.119876] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:41.582 [2024-11-26 20:24:35.122988] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:41.582 [2024-11-26 20:24:35.123057] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:41.582 [2024-11-26 20:24:35.123112] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:41.582 [2024-11-26 20:24:35.123124] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:11:41.582 { 00:11:41.582 "results": [ 00:11:41.582 { 00:11:41.582 "job": "raid_bdev1", 00:11:41.582 "core_mask": "0x1", 00:11:41.582 "workload": "randrw", 00:11:41.582 "percentage": 50, 00:11:41.582 "status": "finished", 00:11:41.582 "queue_depth": 1, 00:11:41.582 "io_size": 131072, 00:11:41.582 "runtime": 1.339681, 00:11:41.582 "iops": 11684.871249200369, 00:11:41.582 "mibps": 1460.608906150046, 00:11:41.582 "io_failed": 1, 00:11:41.582 "io_timeout": 0, 00:11:41.582 "avg_latency_us": 119.50996417010344, 00:11:41.582 "min_latency_us": 33.08995633187773, 00:11:41.582 "max_latency_us": 1774.3371179039302 00:11:41.582 } 00:11:41.582 ], 00:11:41.582 "core_count": 1 00:11:41.582 } 00:11:41.582 20:24:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.582 20:24:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 84338 00:11:41.582 20:24:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 84338 ']' 00:11:41.582 20:24:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 84338 00:11:41.582 20:24:35 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@955 -- # uname 00:11:41.842 20:24:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:41.842 20:24:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84338 00:11:41.842 20:24:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:41.842 20:24:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:41.842 20:24:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84338' 00:11:41.842 killing process with pid 84338 00:11:41.842 20:24:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 84338 00:11:41.842 [2024-11-26 20:24:35.167901] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:41.842 20:24:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 84338 00:11:41.842 [2024-11-26 20:24:35.228580] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:42.102 20:24:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.C76FiAW7s2 00:11:42.102 20:24:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:42.102 20:24:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:42.102 20:24:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:11:42.102 20:24:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:42.102 20:24:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:42.102 20:24:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:42.102 20:24:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:11:42.102 00:11:42.102 real 0m3.659s 00:11:42.102 user 0m4.479s 
00:11:42.102 sys 0m0.711s 00:11:42.102 ************************************ 00:11:42.102 END TEST raid_write_error_test 00:11:42.102 ************************************ 00:11:42.102 20:24:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:42.102 20:24:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.362 20:24:35 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:42.362 20:24:35 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:11:42.362 20:24:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:42.362 20:24:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:42.362 20:24:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:42.362 ************************************ 00:11:42.362 START TEST raid_state_function_test 00:11:42.362 ************************************ 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 false 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:42.362 
20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:42.362 Process raid pid: 84478 
00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=84478 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84478' 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 84478 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 84478 ']' 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:42.362 20:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.362 [2024-11-26 20:24:35.799208] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:11:42.362 [2024-11-26 20:24:35.799847] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:42.621 [2024-11-26 20:24:35.957337] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:42.621 [2024-11-26 20:24:36.045580] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.621 [2024-11-26 20:24:36.129660] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:42.621 [2024-11-26 20:24:36.129710] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:43.191 20:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:43.191 20:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:11:43.191 20:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:43.191 20:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.191 20:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.191 [2024-11-26 20:24:36.708838] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:43.191 [2024-11-26 20:24:36.708909] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:43.191 [2024-11-26 20:24:36.708923] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:43.191 [2024-11-26 20:24:36.708934] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:43.191 [2024-11-26 20:24:36.708943] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:43.191 [2024-11-26 20:24:36.708958] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:43.191 [2024-11-26 20:24:36.708966] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:43.191 [2024-11-26 20:24:36.708976] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:43.191 20:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.191 20:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:43.191 20:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:43.191 20:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:43.191 20:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:43.191 20:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:43.191 20:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:43.191 20:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.191 20:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.191 20:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.191 20:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.191 20:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.191 20:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.191 20:24:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.191 20:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.191 20:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.450 20:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.450 "name": "Existed_Raid", 00:11:43.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.450 "strip_size_kb": 0, 00:11:43.450 "state": "configuring", 00:11:43.450 "raid_level": "raid1", 00:11:43.450 "superblock": false, 00:11:43.450 "num_base_bdevs": 4, 00:11:43.450 "num_base_bdevs_discovered": 0, 00:11:43.450 "num_base_bdevs_operational": 4, 00:11:43.450 "base_bdevs_list": [ 00:11:43.450 { 00:11:43.450 "name": "BaseBdev1", 00:11:43.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.450 "is_configured": false, 00:11:43.450 "data_offset": 0, 00:11:43.450 "data_size": 0 00:11:43.450 }, 00:11:43.450 { 00:11:43.450 "name": "BaseBdev2", 00:11:43.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.450 "is_configured": false, 00:11:43.450 "data_offset": 0, 00:11:43.450 "data_size": 0 00:11:43.450 }, 00:11:43.450 { 00:11:43.450 "name": "BaseBdev3", 00:11:43.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.450 "is_configured": false, 00:11:43.450 "data_offset": 0, 00:11:43.450 "data_size": 0 00:11:43.450 }, 00:11:43.450 { 00:11:43.450 "name": "BaseBdev4", 00:11:43.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.450 "is_configured": false, 00:11:43.450 "data_offset": 0, 00:11:43.450 "data_size": 0 00:11:43.450 } 00:11:43.450 ] 00:11:43.450 }' 00:11:43.450 20:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.450 20:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.709 20:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:11:43.709 20:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.709 20:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.709 [2024-11-26 20:24:37.192771] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:43.709 [2024-11-26 20:24:37.192904] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:11:43.709 20:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.709 20:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:43.709 20:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.709 20:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.709 [2024-11-26 20:24:37.204855] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:43.709 [2024-11-26 20:24:37.204997] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:43.710 [2024-11-26 20:24:37.205013] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:43.710 [2024-11-26 20:24:37.205025] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:43.710 [2024-11-26 20:24:37.205033] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:43.710 [2024-11-26 20:24:37.205043] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:43.710 [2024-11-26 20:24:37.205051] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:43.710 [2024-11-26 20:24:37.205061] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:43.710 20:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.710 20:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:43.710 20:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.710 20:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.710 [2024-11-26 20:24:37.232947] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:43.710 BaseBdev1 00:11:43.710 20:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.710 20:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:43.710 20:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:43.710 20:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:43.710 20:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:43.710 20:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:43.710 20:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:43.710 20:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:43.710 20:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.710 20:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.710 20:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.710 20:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:43.710 20:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.710 20:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.999 [ 00:11:43.999 { 00:11:43.999 "name": "BaseBdev1", 00:11:43.999 "aliases": [ 00:11:43.999 "a94f307c-21bc-43aa-97b6-c9b19a77f87d" 00:11:43.999 ], 00:11:43.999 "product_name": "Malloc disk", 00:11:43.999 "block_size": 512, 00:11:43.999 "num_blocks": 65536, 00:11:43.999 "uuid": "a94f307c-21bc-43aa-97b6-c9b19a77f87d", 00:11:43.999 "assigned_rate_limits": { 00:11:43.999 "rw_ios_per_sec": 0, 00:11:43.999 "rw_mbytes_per_sec": 0, 00:11:43.999 "r_mbytes_per_sec": 0, 00:11:43.999 "w_mbytes_per_sec": 0 00:11:43.999 }, 00:11:43.999 "claimed": true, 00:11:43.999 "claim_type": "exclusive_write", 00:11:43.999 "zoned": false, 00:11:43.999 "supported_io_types": { 00:11:43.999 "read": true, 00:11:43.999 "write": true, 00:11:43.999 "unmap": true, 00:11:43.999 "flush": true, 00:11:43.999 "reset": true, 00:11:43.999 "nvme_admin": false, 00:11:43.999 "nvme_io": false, 00:11:43.999 "nvme_io_md": false, 00:11:43.999 "write_zeroes": true, 00:11:43.999 "zcopy": true, 00:11:43.999 "get_zone_info": false, 00:11:43.999 "zone_management": false, 00:11:43.999 "zone_append": false, 00:11:43.999 "compare": false, 00:11:43.999 "compare_and_write": false, 00:11:43.999 "abort": true, 00:11:43.999 "seek_hole": false, 00:11:43.999 "seek_data": false, 00:11:43.999 "copy": true, 00:11:43.999 "nvme_iov_md": false 00:11:43.999 }, 00:11:43.999 "memory_domains": [ 00:11:43.999 { 00:11:43.999 "dma_device_id": "system", 00:11:43.999 "dma_device_type": 1 00:11:43.999 }, 00:11:43.999 { 00:11:43.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.999 "dma_device_type": 2 00:11:43.999 } 00:11:43.999 ], 00:11:43.999 "driver_specific": {} 00:11:43.999 } 00:11:43.999 ] 00:11:43.999 20:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:11:43.999 20:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:43.999 20:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:43.999 20:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:43.999 20:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:43.999 20:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:43.999 20:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:43.999 20:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:43.999 20:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.999 20:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.999 20:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.999 20:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.999 20:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.999 20:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.999 20:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.999 20:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.999 20:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.999 20:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.999 "name": "Existed_Raid", 
00:11:43.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.999 "strip_size_kb": 0, 00:11:43.999 "state": "configuring", 00:11:43.999 "raid_level": "raid1", 00:11:43.999 "superblock": false, 00:11:43.999 "num_base_bdevs": 4, 00:11:43.999 "num_base_bdevs_discovered": 1, 00:11:43.999 "num_base_bdevs_operational": 4, 00:11:43.999 "base_bdevs_list": [ 00:11:43.999 { 00:11:43.999 "name": "BaseBdev1", 00:11:43.999 "uuid": "a94f307c-21bc-43aa-97b6-c9b19a77f87d", 00:11:43.999 "is_configured": true, 00:11:43.999 "data_offset": 0, 00:11:43.999 "data_size": 65536 00:11:43.999 }, 00:11:43.999 { 00:11:43.999 "name": "BaseBdev2", 00:11:43.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.999 "is_configured": false, 00:11:43.999 "data_offset": 0, 00:11:43.999 "data_size": 0 00:11:43.999 }, 00:11:43.999 { 00:11:43.999 "name": "BaseBdev3", 00:11:43.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.999 "is_configured": false, 00:11:43.999 "data_offset": 0, 00:11:43.999 "data_size": 0 00:11:43.999 }, 00:11:43.999 { 00:11:43.999 "name": "BaseBdev4", 00:11:43.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.999 "is_configured": false, 00:11:43.999 "data_offset": 0, 00:11:43.999 "data_size": 0 00:11:43.999 } 00:11:43.999 ] 00:11:43.999 }' 00:11:43.999 20:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.999 20:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.259 20:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:44.260 20:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.260 20:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.260 [2024-11-26 20:24:37.764785] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:44.260 [2024-11-26 20:24:37.764937] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:11:44.260 20:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.260 20:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:44.260 20:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.260 20:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.260 [2024-11-26 20:24:37.776870] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:44.260 [2024-11-26 20:24:37.779134] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:44.260 [2024-11-26 20:24:37.779236] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:44.260 [2024-11-26 20:24:37.779291] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:44.260 [2024-11-26 20:24:37.779320] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:44.260 [2024-11-26 20:24:37.779355] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:44.260 [2024-11-26 20:24:37.779381] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:44.260 20:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.260 20:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:44.260 20:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:44.260 20:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:44.260 
20:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:44.260 20:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:44.260 20:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:44.260 20:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:44.260 20:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:44.260 20:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.260 20:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.260 20:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.260 20:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.260 20:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.260 20:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:44.260 20:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.260 20:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.260 20:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.520 20:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.520 "name": "Existed_Raid", 00:11:44.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.520 "strip_size_kb": 0, 00:11:44.520 "state": "configuring", 00:11:44.520 "raid_level": "raid1", 00:11:44.520 "superblock": false, 00:11:44.520 "num_base_bdevs": 4, 00:11:44.520 "num_base_bdevs_discovered": 1, 
00:11:44.520 "num_base_bdevs_operational": 4, 00:11:44.520 "base_bdevs_list": [ 00:11:44.520 { 00:11:44.520 "name": "BaseBdev1", 00:11:44.520 "uuid": "a94f307c-21bc-43aa-97b6-c9b19a77f87d", 00:11:44.520 "is_configured": true, 00:11:44.520 "data_offset": 0, 00:11:44.520 "data_size": 65536 00:11:44.520 }, 00:11:44.520 { 00:11:44.520 "name": "BaseBdev2", 00:11:44.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.520 "is_configured": false, 00:11:44.520 "data_offset": 0, 00:11:44.520 "data_size": 0 00:11:44.520 }, 00:11:44.520 { 00:11:44.520 "name": "BaseBdev3", 00:11:44.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.520 "is_configured": false, 00:11:44.520 "data_offset": 0, 00:11:44.520 "data_size": 0 00:11:44.520 }, 00:11:44.520 { 00:11:44.520 "name": "BaseBdev4", 00:11:44.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.520 "is_configured": false, 00:11:44.520 "data_offset": 0, 00:11:44.520 "data_size": 0 00:11:44.520 } 00:11:44.520 ] 00:11:44.520 }' 00:11:44.520 20:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.520 20:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.779 20:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:44.779 20:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.779 20:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.779 [2024-11-26 20:24:38.275360] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:44.779 BaseBdev2 00:11:44.779 20:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.779 20:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:44.779 20:24:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:44.779 20:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:44.779 20:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:44.779 20:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:44.779 20:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:44.779 20:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:44.779 20:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.779 20:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.779 20:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.779 20:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:44.779 20:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.779 20:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.779 [ 00:11:44.779 { 00:11:44.779 "name": "BaseBdev2", 00:11:44.779 "aliases": [ 00:11:44.779 "bfcf7a1e-0927-43e7-85aa-07d2e84fdc9c" 00:11:44.779 ], 00:11:44.779 "product_name": "Malloc disk", 00:11:44.779 "block_size": 512, 00:11:44.779 "num_blocks": 65536, 00:11:44.779 "uuid": "bfcf7a1e-0927-43e7-85aa-07d2e84fdc9c", 00:11:44.779 "assigned_rate_limits": { 00:11:44.779 "rw_ios_per_sec": 0, 00:11:44.779 "rw_mbytes_per_sec": 0, 00:11:44.779 "r_mbytes_per_sec": 0, 00:11:44.779 "w_mbytes_per_sec": 0 00:11:44.779 }, 00:11:44.779 "claimed": true, 00:11:44.780 "claim_type": "exclusive_write", 00:11:44.780 "zoned": false, 00:11:44.780 "supported_io_types": { 00:11:44.780 "read": true, 
00:11:44.780 "write": true, 00:11:44.780 "unmap": true, 00:11:44.780 "flush": true, 00:11:44.780 "reset": true, 00:11:44.780 "nvme_admin": false, 00:11:44.780 "nvme_io": false, 00:11:44.780 "nvme_io_md": false, 00:11:44.780 "write_zeroes": true, 00:11:44.780 "zcopy": true, 00:11:44.780 "get_zone_info": false, 00:11:44.780 "zone_management": false, 00:11:44.780 "zone_append": false, 00:11:44.780 "compare": false, 00:11:44.780 "compare_and_write": false, 00:11:44.780 "abort": true, 00:11:44.780 "seek_hole": false, 00:11:44.780 "seek_data": false, 00:11:44.780 "copy": true, 00:11:44.780 "nvme_iov_md": false 00:11:44.780 }, 00:11:44.780 "memory_domains": [ 00:11:44.780 { 00:11:44.780 "dma_device_id": "system", 00:11:44.780 "dma_device_type": 1 00:11:44.780 }, 00:11:44.780 { 00:11:44.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.780 "dma_device_type": 2 00:11:44.780 } 00:11:44.780 ], 00:11:44.780 "driver_specific": {} 00:11:44.780 } 00:11:44.780 ] 00:11:44.780 20:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.780 20:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:44.780 20:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:44.780 20:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:44.780 20:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:44.780 20:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:44.780 20:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:44.780 20:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:44.780 20:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:44.780 20:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:44.780 20:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.780 20:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.780 20:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.780 20:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.780 20:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.780 20:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.780 20:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.780 20:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:45.038 20:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.038 20:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.038 "name": "Existed_Raid", 00:11:45.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.038 "strip_size_kb": 0, 00:11:45.038 "state": "configuring", 00:11:45.038 "raid_level": "raid1", 00:11:45.038 "superblock": false, 00:11:45.038 "num_base_bdevs": 4, 00:11:45.038 "num_base_bdevs_discovered": 2, 00:11:45.038 "num_base_bdevs_operational": 4, 00:11:45.038 "base_bdevs_list": [ 00:11:45.038 { 00:11:45.039 "name": "BaseBdev1", 00:11:45.039 "uuid": "a94f307c-21bc-43aa-97b6-c9b19a77f87d", 00:11:45.039 "is_configured": true, 00:11:45.039 "data_offset": 0, 00:11:45.039 "data_size": 65536 00:11:45.039 }, 00:11:45.039 { 00:11:45.039 "name": "BaseBdev2", 00:11:45.039 "uuid": "bfcf7a1e-0927-43e7-85aa-07d2e84fdc9c", 00:11:45.039 "is_configured": true, 
00:11:45.039 "data_offset": 0, 00:11:45.039 "data_size": 65536 00:11:45.039 }, 00:11:45.039 { 00:11:45.039 "name": "BaseBdev3", 00:11:45.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.039 "is_configured": false, 00:11:45.039 "data_offset": 0, 00:11:45.039 "data_size": 0 00:11:45.039 }, 00:11:45.039 { 00:11:45.039 "name": "BaseBdev4", 00:11:45.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.039 "is_configured": false, 00:11:45.039 "data_offset": 0, 00:11:45.039 "data_size": 0 00:11:45.039 } 00:11:45.039 ] 00:11:45.039 }' 00:11:45.039 20:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.039 20:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.297 20:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:45.297 20:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.297 20:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.297 [2024-11-26 20:24:38.832738] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:45.297 BaseBdev3 00:11:45.297 20:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.297 20:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:45.297 20:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:45.297 20:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:45.297 20:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:45.297 20:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:45.297 20:24:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:45.297 20:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:45.297 20:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.297 20:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.297 20:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.297 20:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:45.297 20:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.297 20:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.556 [ 00:11:45.556 { 00:11:45.556 "name": "BaseBdev3", 00:11:45.556 "aliases": [ 00:11:45.556 "f4f0b972-f519-427a-b52a-5db02181e985" 00:11:45.557 ], 00:11:45.557 "product_name": "Malloc disk", 00:11:45.557 "block_size": 512, 00:11:45.557 "num_blocks": 65536, 00:11:45.557 "uuid": "f4f0b972-f519-427a-b52a-5db02181e985", 00:11:45.557 "assigned_rate_limits": { 00:11:45.557 "rw_ios_per_sec": 0, 00:11:45.557 "rw_mbytes_per_sec": 0, 00:11:45.557 "r_mbytes_per_sec": 0, 00:11:45.557 "w_mbytes_per_sec": 0 00:11:45.557 }, 00:11:45.557 "claimed": true, 00:11:45.557 "claim_type": "exclusive_write", 00:11:45.557 "zoned": false, 00:11:45.557 "supported_io_types": { 00:11:45.557 "read": true, 00:11:45.557 "write": true, 00:11:45.557 "unmap": true, 00:11:45.557 "flush": true, 00:11:45.557 "reset": true, 00:11:45.557 "nvme_admin": false, 00:11:45.557 "nvme_io": false, 00:11:45.557 "nvme_io_md": false, 00:11:45.557 "write_zeroes": true, 00:11:45.557 "zcopy": true, 00:11:45.557 "get_zone_info": false, 00:11:45.557 "zone_management": false, 00:11:45.557 "zone_append": false, 00:11:45.557 "compare": false, 00:11:45.557 "compare_and_write": false, 
00:11:45.557 "abort": true, 00:11:45.557 "seek_hole": false, 00:11:45.557 "seek_data": false, 00:11:45.557 "copy": true, 00:11:45.557 "nvme_iov_md": false 00:11:45.557 }, 00:11:45.557 "memory_domains": [ 00:11:45.557 { 00:11:45.557 "dma_device_id": "system", 00:11:45.557 "dma_device_type": 1 00:11:45.557 }, 00:11:45.557 { 00:11:45.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.557 "dma_device_type": 2 00:11:45.557 } 00:11:45.557 ], 00:11:45.557 "driver_specific": {} 00:11:45.557 } 00:11:45.557 ] 00:11:45.557 20:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.557 20:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:45.557 20:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:45.557 20:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:45.557 20:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:45.557 20:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:45.557 20:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:45.557 20:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:45.557 20:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:45.557 20:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:45.557 20:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.557 20:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.557 20:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:45.557 20:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.557 20:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.557 20:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.557 20:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.557 20:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:45.557 20:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.557 20:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.557 "name": "Existed_Raid", 00:11:45.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.557 "strip_size_kb": 0, 00:11:45.557 "state": "configuring", 00:11:45.557 "raid_level": "raid1", 00:11:45.557 "superblock": false, 00:11:45.557 "num_base_bdevs": 4, 00:11:45.557 "num_base_bdevs_discovered": 3, 00:11:45.557 "num_base_bdevs_operational": 4, 00:11:45.557 "base_bdevs_list": [ 00:11:45.557 { 00:11:45.557 "name": "BaseBdev1", 00:11:45.557 "uuid": "a94f307c-21bc-43aa-97b6-c9b19a77f87d", 00:11:45.557 "is_configured": true, 00:11:45.557 "data_offset": 0, 00:11:45.557 "data_size": 65536 00:11:45.557 }, 00:11:45.557 { 00:11:45.557 "name": "BaseBdev2", 00:11:45.557 "uuid": "bfcf7a1e-0927-43e7-85aa-07d2e84fdc9c", 00:11:45.557 "is_configured": true, 00:11:45.557 "data_offset": 0, 00:11:45.557 "data_size": 65536 00:11:45.557 }, 00:11:45.557 { 00:11:45.557 "name": "BaseBdev3", 00:11:45.557 "uuid": "f4f0b972-f519-427a-b52a-5db02181e985", 00:11:45.557 "is_configured": true, 00:11:45.557 "data_offset": 0, 00:11:45.557 "data_size": 65536 00:11:45.557 }, 00:11:45.557 { 00:11:45.557 "name": "BaseBdev4", 00:11:45.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.557 "is_configured": false, 
00:11:45.557 "data_offset": 0, 00:11:45.557 "data_size": 0 00:11:45.557 } 00:11:45.557 ] 00:11:45.557 }' 00:11:45.557 20:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.557 20:24:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.816 20:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:45.816 20:24:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.816 20:24:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.816 [2024-11-26 20:24:39.354738] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:45.816 [2024-11-26 20:24:39.354797] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:11:45.816 [2024-11-26 20:24:39.354806] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:45.816 [2024-11-26 20:24:39.355146] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:45.816 [2024-11-26 20:24:39.355300] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:11:45.816 [2024-11-26 20:24:39.355321] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:11:45.816 [2024-11-26 20:24:39.355565] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:45.816 BaseBdev4 00:11:45.816 20:24:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.816 20:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:45.816 20:24:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:45.816 20:24:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:45.816 20:24:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:45.816 20:24:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:45.816 20:24:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:45.816 20:24:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:45.816 20:24:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.816 20:24:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.075 20:24:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.075 20:24:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:46.075 20:24:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.075 20:24:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.075 [ 00:11:46.075 { 00:11:46.075 "name": "BaseBdev4", 00:11:46.075 "aliases": [ 00:11:46.075 "cded65a1-d0af-4a62-822f-e71313632f8c" 00:11:46.075 ], 00:11:46.075 "product_name": "Malloc disk", 00:11:46.075 "block_size": 512, 00:11:46.075 "num_blocks": 65536, 00:11:46.075 "uuid": "cded65a1-d0af-4a62-822f-e71313632f8c", 00:11:46.075 "assigned_rate_limits": { 00:11:46.075 "rw_ios_per_sec": 0, 00:11:46.075 "rw_mbytes_per_sec": 0, 00:11:46.075 "r_mbytes_per_sec": 0, 00:11:46.075 "w_mbytes_per_sec": 0 00:11:46.075 }, 00:11:46.075 "claimed": true, 00:11:46.075 "claim_type": "exclusive_write", 00:11:46.075 "zoned": false, 00:11:46.075 "supported_io_types": { 00:11:46.075 "read": true, 00:11:46.075 "write": true, 00:11:46.075 "unmap": true, 00:11:46.075 "flush": true, 00:11:46.075 "reset": true, 00:11:46.075 
"nvme_admin": false, 00:11:46.075 "nvme_io": false, 00:11:46.075 "nvme_io_md": false, 00:11:46.075 "write_zeroes": true, 00:11:46.075 "zcopy": true, 00:11:46.075 "get_zone_info": false, 00:11:46.075 "zone_management": false, 00:11:46.075 "zone_append": false, 00:11:46.075 "compare": false, 00:11:46.075 "compare_and_write": false, 00:11:46.075 "abort": true, 00:11:46.076 "seek_hole": false, 00:11:46.076 "seek_data": false, 00:11:46.076 "copy": true, 00:11:46.076 "nvme_iov_md": false 00:11:46.076 }, 00:11:46.076 "memory_domains": [ 00:11:46.076 { 00:11:46.076 "dma_device_id": "system", 00:11:46.076 "dma_device_type": 1 00:11:46.076 }, 00:11:46.076 { 00:11:46.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.076 "dma_device_type": 2 00:11:46.076 } 00:11:46.076 ], 00:11:46.076 "driver_specific": {} 00:11:46.076 } 00:11:46.076 ] 00:11:46.076 20:24:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.076 20:24:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:46.076 20:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:46.076 20:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:46.076 20:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:46.076 20:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:46.076 20:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:46.076 20:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:46.076 20:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:46.076 20:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:46.076 20:24:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.076 20:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.076 20:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.076 20:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.076 20:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.076 20:24:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.076 20:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:46.076 20:24:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.076 20:24:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.076 20:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.076 "name": "Existed_Raid", 00:11:46.076 "uuid": "9bc5375f-eb91-4929-98c5-03bf1d2bf66f", 00:11:46.076 "strip_size_kb": 0, 00:11:46.076 "state": "online", 00:11:46.076 "raid_level": "raid1", 00:11:46.076 "superblock": false, 00:11:46.076 "num_base_bdevs": 4, 00:11:46.076 "num_base_bdevs_discovered": 4, 00:11:46.076 "num_base_bdevs_operational": 4, 00:11:46.076 "base_bdevs_list": [ 00:11:46.076 { 00:11:46.076 "name": "BaseBdev1", 00:11:46.076 "uuid": "a94f307c-21bc-43aa-97b6-c9b19a77f87d", 00:11:46.076 "is_configured": true, 00:11:46.076 "data_offset": 0, 00:11:46.076 "data_size": 65536 00:11:46.076 }, 00:11:46.076 { 00:11:46.076 "name": "BaseBdev2", 00:11:46.076 "uuid": "bfcf7a1e-0927-43e7-85aa-07d2e84fdc9c", 00:11:46.076 "is_configured": true, 00:11:46.076 "data_offset": 0, 00:11:46.076 "data_size": 65536 00:11:46.076 }, 00:11:46.076 { 00:11:46.076 "name": "BaseBdev3", 00:11:46.076 "uuid": 
"f4f0b972-f519-427a-b52a-5db02181e985", 00:11:46.076 "is_configured": true, 00:11:46.076 "data_offset": 0, 00:11:46.076 "data_size": 65536 00:11:46.076 }, 00:11:46.076 { 00:11:46.076 "name": "BaseBdev4", 00:11:46.076 "uuid": "cded65a1-d0af-4a62-822f-e71313632f8c", 00:11:46.076 "is_configured": true, 00:11:46.076 "data_offset": 0, 00:11:46.076 "data_size": 65536 00:11:46.076 } 00:11:46.076 ] 00:11:46.076 }' 00:11:46.076 20:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.076 20:24:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.335 20:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:46.335 20:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:46.335 20:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:46.335 20:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:46.335 20:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:46.335 20:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:46.335 20:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:46.335 20:24:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.335 20:24:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.335 20:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:46.335 [2024-11-26 20:24:39.858403] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:46.335 20:24:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.335 20:24:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:46.335 "name": "Existed_Raid", 00:11:46.335 "aliases": [ 00:11:46.335 "9bc5375f-eb91-4929-98c5-03bf1d2bf66f" 00:11:46.335 ], 00:11:46.335 "product_name": "Raid Volume", 00:11:46.335 "block_size": 512, 00:11:46.335 "num_blocks": 65536, 00:11:46.335 "uuid": "9bc5375f-eb91-4929-98c5-03bf1d2bf66f", 00:11:46.335 "assigned_rate_limits": { 00:11:46.335 "rw_ios_per_sec": 0, 00:11:46.335 "rw_mbytes_per_sec": 0, 00:11:46.335 "r_mbytes_per_sec": 0, 00:11:46.335 "w_mbytes_per_sec": 0 00:11:46.335 }, 00:11:46.335 "claimed": false, 00:11:46.335 "zoned": false, 00:11:46.335 "supported_io_types": { 00:11:46.335 "read": true, 00:11:46.335 "write": true, 00:11:46.335 "unmap": false, 00:11:46.335 "flush": false, 00:11:46.335 "reset": true, 00:11:46.335 "nvme_admin": false, 00:11:46.335 "nvme_io": false, 00:11:46.335 "nvme_io_md": false, 00:11:46.335 "write_zeroes": true, 00:11:46.335 "zcopy": false, 00:11:46.335 "get_zone_info": false, 00:11:46.335 "zone_management": false, 00:11:46.335 "zone_append": false, 00:11:46.335 "compare": false, 00:11:46.335 "compare_and_write": false, 00:11:46.335 "abort": false, 00:11:46.335 "seek_hole": false, 00:11:46.335 "seek_data": false, 00:11:46.335 "copy": false, 00:11:46.335 "nvme_iov_md": false 00:11:46.335 }, 00:11:46.335 "memory_domains": [ 00:11:46.335 { 00:11:46.335 "dma_device_id": "system", 00:11:46.335 "dma_device_type": 1 00:11:46.335 }, 00:11:46.335 { 00:11:46.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.335 "dma_device_type": 2 00:11:46.335 }, 00:11:46.335 { 00:11:46.335 "dma_device_id": "system", 00:11:46.335 "dma_device_type": 1 00:11:46.335 }, 00:11:46.335 { 00:11:46.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.335 "dma_device_type": 2 00:11:46.335 }, 00:11:46.335 { 00:11:46.335 "dma_device_id": "system", 00:11:46.335 "dma_device_type": 1 00:11:46.335 }, 00:11:46.335 { 00:11:46.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:11:46.335 "dma_device_type": 2 00:11:46.335 }, 00:11:46.335 { 00:11:46.335 "dma_device_id": "system", 00:11:46.335 "dma_device_type": 1 00:11:46.335 }, 00:11:46.335 { 00:11:46.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.335 "dma_device_type": 2 00:11:46.335 } 00:11:46.335 ], 00:11:46.335 "driver_specific": { 00:11:46.335 "raid": { 00:11:46.335 "uuid": "9bc5375f-eb91-4929-98c5-03bf1d2bf66f", 00:11:46.335 "strip_size_kb": 0, 00:11:46.335 "state": "online", 00:11:46.335 "raid_level": "raid1", 00:11:46.335 "superblock": false, 00:11:46.335 "num_base_bdevs": 4, 00:11:46.335 "num_base_bdevs_discovered": 4, 00:11:46.335 "num_base_bdevs_operational": 4, 00:11:46.335 "base_bdevs_list": [ 00:11:46.335 { 00:11:46.335 "name": "BaseBdev1", 00:11:46.335 "uuid": "a94f307c-21bc-43aa-97b6-c9b19a77f87d", 00:11:46.335 "is_configured": true, 00:11:46.335 "data_offset": 0, 00:11:46.335 "data_size": 65536 00:11:46.335 }, 00:11:46.335 { 00:11:46.335 "name": "BaseBdev2", 00:11:46.335 "uuid": "bfcf7a1e-0927-43e7-85aa-07d2e84fdc9c", 00:11:46.335 "is_configured": true, 00:11:46.335 "data_offset": 0, 00:11:46.335 "data_size": 65536 00:11:46.335 }, 00:11:46.335 { 00:11:46.335 "name": "BaseBdev3", 00:11:46.335 "uuid": "f4f0b972-f519-427a-b52a-5db02181e985", 00:11:46.335 "is_configured": true, 00:11:46.335 "data_offset": 0, 00:11:46.335 "data_size": 65536 00:11:46.335 }, 00:11:46.335 { 00:11:46.335 "name": "BaseBdev4", 00:11:46.335 "uuid": "cded65a1-d0af-4a62-822f-e71313632f8c", 00:11:46.335 "is_configured": true, 00:11:46.335 "data_offset": 0, 00:11:46.335 "data_size": 65536 00:11:46.335 } 00:11:46.335 ] 00:11:46.336 } 00:11:46.336 } 00:11:46.336 }' 00:11:46.595 20:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:46.595 20:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:46.595 BaseBdev2 00:11:46.595 BaseBdev3 
00:11:46.595 BaseBdev4' 00:11:46.595 20:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.595 20:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:46.595 20:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:46.595 20:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.595 20:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:46.595 20:24:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.595 20:24:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.595 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.595 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:46.595 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:46.595 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:46.595 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.595 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:46.595 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.595 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.595 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.595 20:24:40 
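The trace above captures the raid bdev's JSON description and extracts the configured base bdev names with jq. The same filter can be rerun standalone against a trimmed-down, hypothetical stand-in for that payload (only the fields the filter touches are kept):

```shell
# Hypothetical, minimal stand-in for the raid_bdev_info JSON in the trace.
json='{"driver_specific":{"raid":{"base_bdevs_list":[
  {"name":"BaseBdev1","is_configured":true},
  {"name":"BaseBdev2","is_configured":true},
  {"name":"BaseBdev3","is_configured":false}]}}}'
# Same filter as bdev_raid.sh@188: keep only configured base bdevs.
names=$(jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' <<<"$json")
echo "$names"
```

With all four members configured, as in the run above, this yields the space/newline-separated list that the script then iterates over as `$base_bdev_names`.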
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:46.595 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:46.595 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:46.595 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.595 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:46.595 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.595 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.595 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.595 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:46.595 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:46.595 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:46.595 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:46.595 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.595 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.595 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.855 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.855 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:46.855 20:24:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:46.855 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:46.855 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.855 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.855 [2024-11-26 20:24:40.201666] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:46.855 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.855 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:46.855 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:46.855 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:46.855 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:46.855 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:46.855 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:46.855 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:46.855 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:46.855 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:46.855 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:46.855 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:46.855 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.855 
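Each loop pass above joins `[.block_size, .md_size, .md_interleave, .dif_type]` with spaces; jq renders the null metadata fields as empty strings, so a plain 512-byte bdev produces `"512"` followed by three trailing spaces, which is exactly what the escaped pattern `\5\1\2\ \ \ ` matches. A pure-bash sketch of that comparison (the base bdev value is a hypothetical example):

```shell
# join(" ") over [512, null, null, null] gives "512" plus three spaces.
cmp_raid_bdev='512   '
cmp_base_bdev='512   '   # hypothetical geometry string for one base bdev
# The test passes only if every base bdev reports the raid bdev's geometry,
# trailing empty fields included.
if [[ "$cmp_base_bdev" == "$cmp_raid_bdev" ]]; then
  result=match
else
  result=mismatch
fi
echo "$result"
```

The trailing spaces are load-bearing: a base bdev with metadata enabled would join to something like `512 8 true 1` and fail the comparison.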
20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.855 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.855 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.855 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.855 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:46.855 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.855 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.855 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.855 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.855 "name": "Existed_Raid", 00:11:46.855 "uuid": "9bc5375f-eb91-4929-98c5-03bf1d2bf66f", 00:11:46.855 "strip_size_kb": 0, 00:11:46.855 "state": "online", 00:11:46.855 "raid_level": "raid1", 00:11:46.855 "superblock": false, 00:11:46.855 "num_base_bdevs": 4, 00:11:46.855 "num_base_bdevs_discovered": 3, 00:11:46.855 "num_base_bdevs_operational": 3, 00:11:46.855 "base_bdevs_list": [ 00:11:46.855 { 00:11:46.855 "name": null, 00:11:46.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.855 "is_configured": false, 00:11:46.855 "data_offset": 0, 00:11:46.855 "data_size": 65536 00:11:46.855 }, 00:11:46.855 { 00:11:46.855 "name": "BaseBdev2", 00:11:46.855 "uuid": "bfcf7a1e-0927-43e7-85aa-07d2e84fdc9c", 00:11:46.855 "is_configured": true, 00:11:46.855 "data_offset": 0, 00:11:46.855 "data_size": 65536 00:11:46.855 }, 00:11:46.855 { 00:11:46.855 "name": "BaseBdev3", 00:11:46.855 "uuid": "f4f0b972-f519-427a-b52a-5db02181e985", 00:11:46.855 "is_configured": true, 00:11:46.855 "data_offset": 0, 
00:11:46.855 "data_size": 65536 00:11:46.855 }, 00:11:46.855 { 00:11:46.855 "name": "BaseBdev4", 00:11:46.855 "uuid": "cded65a1-d0af-4a62-822f-e71313632f8c", 00:11:46.855 "is_configured": true, 00:11:46.855 "data_offset": 0, 00:11:46.855 "data_size": 65536 00:11:46.855 } 00:11:46.855 ] 00:11:46.855 }' 00:11:46.855 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.855 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.113 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:47.113 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:47.113 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.113 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:47.113 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.113 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.113 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.375 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:47.375 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:47.375 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:47.375 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.375 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.375 [2024-11-26 20:24:40.682439] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:47.375 20:24:40 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.375 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:47.375 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:47.375 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.375 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:47.375 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.375 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.375 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.375 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:47.375 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:47.375 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:47.375 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.375 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.375 [2024-11-26 20:24:40.752190] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:47.375 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.375 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:47.375 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:47.375 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:47.375 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:11:47.375 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.375 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.375 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.375 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:47.375 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:47.375 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:47.375 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.375 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.375 [2024-11-26 20:24:40.813204] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:47.375 [2024-11-26 20:24:40.813305] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:47.376 [2024-11-26 20:24:40.829147] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:47.376 [2024-11-26 20:24:40.829273] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:47.376 [2024-11-26 20:24:40.829328] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:11:47.376 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.376 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:47.376 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:47.376 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # 
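The passage above deletes the base bdevs one at a time; with raid1's redundancy the array stays online through each removal, and only after the last member (BaseBdev4) goes does `raid_bdev_deconfigure` flip the state from online to offline. A toy bash model of that bookkeeping (an assumption mimicking the observed behaviour, not the real bdev_raid state machine):

```shell
# Toy model: raid1 tolerates member loss until no base bdevs remain.
num_discovered=4
state=online
for b in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
  num_discovered=$((num_discovered - 1))
  if (( num_discovered == 0 )); then
    state=offline
  fi
  echo "removed $b -> discovered=$num_discovered state=$state"
done
```

This mirrors the trace: `num_base_bdevs_discovered` drops 3, 2, 1 with `"state": "online"` each time, and the offline transition log appears only alongside the BaseBdev4 removal.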
rpc_cmd bdev_raid_get_bdevs all 00:11:47.376 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:47.376 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.376 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.376 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.376 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:47.376 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:47.376 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:47.376 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:47.376 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:47.376 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:47.376 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.376 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.376 BaseBdev2 00:11:47.376 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.376 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:47.376 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:47.376 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:47.376 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:47.376 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 
-- # [[ -z '' ]] 00:11:47.376 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:47.376 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:47.376 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.376 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.376 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.376 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:47.376 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.376 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.640 [ 00:11:47.640 { 00:11:47.640 "name": "BaseBdev2", 00:11:47.640 "aliases": [ 00:11:47.640 "a4a07ffd-cd53-4f80-b48a-9c550a7893ef" 00:11:47.640 ], 00:11:47.640 "product_name": "Malloc disk", 00:11:47.640 "block_size": 512, 00:11:47.640 "num_blocks": 65536, 00:11:47.640 "uuid": "a4a07ffd-cd53-4f80-b48a-9c550a7893ef", 00:11:47.640 "assigned_rate_limits": { 00:11:47.640 "rw_ios_per_sec": 0, 00:11:47.640 "rw_mbytes_per_sec": 0, 00:11:47.640 "r_mbytes_per_sec": 0, 00:11:47.640 "w_mbytes_per_sec": 0 00:11:47.640 }, 00:11:47.640 "claimed": false, 00:11:47.640 "zoned": false, 00:11:47.640 "supported_io_types": { 00:11:47.640 "read": true, 00:11:47.640 "write": true, 00:11:47.640 "unmap": true, 00:11:47.640 "flush": true, 00:11:47.640 "reset": true, 00:11:47.640 "nvme_admin": false, 00:11:47.640 "nvme_io": false, 00:11:47.640 "nvme_io_md": false, 00:11:47.640 "write_zeroes": true, 00:11:47.640 "zcopy": true, 00:11:47.640 "get_zone_info": false, 00:11:47.640 "zone_management": false, 00:11:47.640 "zone_append": false, 00:11:47.640 "compare": false, 
00:11:47.640 "compare_and_write": false, 00:11:47.640 "abort": true, 00:11:47.640 "seek_hole": false, 00:11:47.640 "seek_data": false, 00:11:47.640 "copy": true, 00:11:47.640 "nvme_iov_md": false 00:11:47.640 }, 00:11:47.640 "memory_domains": [ 00:11:47.640 { 00:11:47.640 "dma_device_id": "system", 00:11:47.640 "dma_device_type": 1 00:11:47.640 }, 00:11:47.640 { 00:11:47.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.640 "dma_device_type": 2 00:11:47.640 } 00:11:47.640 ], 00:11:47.640 "driver_specific": {} 00:11:47.640 } 00:11:47.640 ] 00:11:47.640 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.640 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:47.640 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:47.640 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:47.640 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:47.640 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.640 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.640 BaseBdev3 00:11:47.640 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.640 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:47.640 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:47.640 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:47.640 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:47.640 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' 
]] 00:11:47.640 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:47.640 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:47.640 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.640 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.640 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.640 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:47.640 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.640 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.640 [ 00:11:47.640 { 00:11:47.640 "name": "BaseBdev3", 00:11:47.640 "aliases": [ 00:11:47.640 "28ce5202-0c28-4018-99de-4b710bd96dd9" 00:11:47.640 ], 00:11:47.640 "product_name": "Malloc disk", 00:11:47.640 "block_size": 512, 00:11:47.640 "num_blocks": 65536, 00:11:47.640 "uuid": "28ce5202-0c28-4018-99de-4b710bd96dd9", 00:11:47.640 "assigned_rate_limits": { 00:11:47.640 "rw_ios_per_sec": 0, 00:11:47.640 "rw_mbytes_per_sec": 0, 00:11:47.640 "r_mbytes_per_sec": 0, 00:11:47.640 "w_mbytes_per_sec": 0 00:11:47.640 }, 00:11:47.640 "claimed": false, 00:11:47.640 "zoned": false, 00:11:47.640 "supported_io_types": { 00:11:47.640 "read": true, 00:11:47.640 "write": true, 00:11:47.640 "unmap": true, 00:11:47.640 "flush": true, 00:11:47.640 "reset": true, 00:11:47.640 "nvme_admin": false, 00:11:47.640 "nvme_io": false, 00:11:47.640 "nvme_io_md": false, 00:11:47.640 "write_zeroes": true, 00:11:47.640 "zcopy": true, 00:11:47.640 "get_zone_info": false, 00:11:47.640 "zone_management": false, 00:11:47.640 "zone_append": false, 00:11:47.640 "compare": false, 00:11:47.640 
"compare_and_write": false, 00:11:47.640 "abort": true, 00:11:47.640 "seek_hole": false, 00:11:47.640 "seek_data": false, 00:11:47.640 "copy": true, 00:11:47.640 "nvme_iov_md": false 00:11:47.640 }, 00:11:47.640 "memory_domains": [ 00:11:47.640 { 00:11:47.640 "dma_device_id": "system", 00:11:47.640 "dma_device_type": 1 00:11:47.640 }, 00:11:47.640 { 00:11:47.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.640 "dma_device_type": 2 00:11:47.641 } 00:11:47.641 ], 00:11:47.641 "driver_specific": {} 00:11:47.641 } 00:11:47.641 ] 00:11:47.641 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.641 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:47.641 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:47.641 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:47.641 20:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:47.641 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.641 20:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.641 BaseBdev4 00:11:47.641 20:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.641 20:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:47.641 20:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:47.641 20:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:47.641 20:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:47.641 20:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 
00:11:47.641 20:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:47.641 20:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:47.641 20:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.641 20:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.641 20:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.641 20:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:47.641 20:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.641 20:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.641 [ 00:11:47.641 { 00:11:47.641 "name": "BaseBdev4", 00:11:47.641 "aliases": [ 00:11:47.641 "c6e7a691-93e6-4fad-86ce-6e2a4fc30a4f" 00:11:47.641 ], 00:11:47.641 "product_name": "Malloc disk", 00:11:47.641 "block_size": 512, 00:11:47.641 "num_blocks": 65536, 00:11:47.641 "uuid": "c6e7a691-93e6-4fad-86ce-6e2a4fc30a4f", 00:11:47.641 "assigned_rate_limits": { 00:11:47.641 "rw_ios_per_sec": 0, 00:11:47.641 "rw_mbytes_per_sec": 0, 00:11:47.641 "r_mbytes_per_sec": 0, 00:11:47.641 "w_mbytes_per_sec": 0 00:11:47.641 }, 00:11:47.641 "claimed": false, 00:11:47.641 "zoned": false, 00:11:47.641 "supported_io_types": { 00:11:47.641 "read": true, 00:11:47.641 "write": true, 00:11:47.641 "unmap": true, 00:11:47.641 "flush": true, 00:11:47.641 "reset": true, 00:11:47.641 "nvme_admin": false, 00:11:47.641 "nvme_io": false, 00:11:47.641 "nvme_io_md": false, 00:11:47.641 "write_zeroes": true, 00:11:47.641 "zcopy": true, 00:11:47.641 "get_zone_info": false, 00:11:47.641 "zone_management": false, 00:11:47.641 "zone_append": false, 00:11:47.641 "compare": false, 00:11:47.641 
"compare_and_write": false, 00:11:47.641 "abort": true, 00:11:47.641 "seek_hole": false, 00:11:47.641 "seek_data": false, 00:11:47.641 "copy": true, 00:11:47.641 "nvme_iov_md": false 00:11:47.641 }, 00:11:47.641 "memory_domains": [ 00:11:47.641 { 00:11:47.641 "dma_device_id": "system", 00:11:47.641 "dma_device_type": 1 00:11:47.641 }, 00:11:47.641 { 00:11:47.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.641 "dma_device_type": 2 00:11:47.641 } 00:11:47.641 ], 00:11:47.641 "driver_specific": {} 00:11:47.641 } 00:11:47.641 ] 00:11:47.641 20:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.641 20:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:47.641 20:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:47.641 20:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:47.641 20:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:47.641 20:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.641 20:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.641 [2024-11-26 20:24:41.070403] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:47.641 [2024-11-26 20:24:41.070534] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:47.641 [2024-11-26 20:24:41.070584] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:47.641 [2024-11-26 20:24:41.072525] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:47.641 [2024-11-26 20:24:41.072652] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 
00:11:47.641 20:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.641 20:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:47.641 20:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:47.641 20:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:47.641 20:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:47.641 20:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:47.641 20:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:47.641 20:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.641 20:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.641 20:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.641 20:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.641 20:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.641 20:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.641 20:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.641 20:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.641 20:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.641 20:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.641 "name": "Existed_Raid", 00:11:47.641 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:47.641 "strip_size_kb": 0, 00:11:47.641 "state": "configuring", 00:11:47.641 "raid_level": "raid1", 00:11:47.641 "superblock": false, 00:11:47.641 "num_base_bdevs": 4, 00:11:47.641 "num_base_bdevs_discovered": 3, 00:11:47.641 "num_base_bdevs_operational": 4, 00:11:47.641 "base_bdevs_list": [ 00:11:47.641 { 00:11:47.641 "name": "BaseBdev1", 00:11:47.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.641 "is_configured": false, 00:11:47.641 "data_offset": 0, 00:11:47.641 "data_size": 0 00:11:47.641 }, 00:11:47.641 { 00:11:47.641 "name": "BaseBdev2", 00:11:47.641 "uuid": "a4a07ffd-cd53-4f80-b48a-9c550a7893ef", 00:11:47.641 "is_configured": true, 00:11:47.641 "data_offset": 0, 00:11:47.641 "data_size": 65536 00:11:47.641 }, 00:11:47.641 { 00:11:47.641 "name": "BaseBdev3", 00:11:47.641 "uuid": "28ce5202-0c28-4018-99de-4b710bd96dd9", 00:11:47.641 "is_configured": true, 00:11:47.641 "data_offset": 0, 00:11:47.641 "data_size": 65536 00:11:47.641 }, 00:11:47.641 { 00:11:47.641 "name": "BaseBdev4", 00:11:47.641 "uuid": "c6e7a691-93e6-4fad-86ce-6e2a4fc30a4f", 00:11:47.641 "is_configured": true, 00:11:47.641 "data_offset": 0, 00:11:47.641 "data_size": 65536 00:11:47.641 } 00:11:47.641 ] 00:11:47.641 }' 00:11:47.641 20:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.641 20:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.210 20:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:48.210 20:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.210 20:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.210 [2024-11-26 20:24:41.465777] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:48.210 20:24:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.210 20:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:48.210 20:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:48.210 20:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:48.210 20:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.210 20:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.210 20:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:48.210 20:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.210 20:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.210 20:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.210 20:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.210 20:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.210 20:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.210 20:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.210 20:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.210 20:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.210 20:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.210 "name": "Existed_Raid", 00:11:48.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.210 
"strip_size_kb": 0, 00:11:48.210 "state": "configuring", 00:11:48.210 "raid_level": "raid1", 00:11:48.210 "superblock": false, 00:11:48.210 "num_base_bdevs": 4, 00:11:48.210 "num_base_bdevs_discovered": 2, 00:11:48.210 "num_base_bdevs_operational": 4, 00:11:48.210 "base_bdevs_list": [ 00:11:48.210 { 00:11:48.210 "name": "BaseBdev1", 00:11:48.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.210 "is_configured": false, 00:11:48.210 "data_offset": 0, 00:11:48.210 "data_size": 0 00:11:48.210 }, 00:11:48.210 { 00:11:48.210 "name": null, 00:11:48.210 "uuid": "a4a07ffd-cd53-4f80-b48a-9c550a7893ef", 00:11:48.210 "is_configured": false, 00:11:48.210 "data_offset": 0, 00:11:48.210 "data_size": 65536 00:11:48.210 }, 00:11:48.210 { 00:11:48.210 "name": "BaseBdev3", 00:11:48.210 "uuid": "28ce5202-0c28-4018-99de-4b710bd96dd9", 00:11:48.210 "is_configured": true, 00:11:48.210 "data_offset": 0, 00:11:48.210 "data_size": 65536 00:11:48.210 }, 00:11:48.210 { 00:11:48.210 "name": "BaseBdev4", 00:11:48.210 "uuid": "c6e7a691-93e6-4fad-86ce-6e2a4fc30a4f", 00:11:48.210 "is_configured": true, 00:11:48.210 "data_offset": 0, 00:11:48.210 "data_size": 65536 00:11:48.210 } 00:11:48.210 ] 00:11:48.210 }' 00:11:48.210 20:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.210 20:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.470 20:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.470 20:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.470 20:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:48.470 20:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.470 20:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.470 20:24:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:48.470 20:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:48.470 20:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.470 20:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.470 [2024-11-26 20:24:42.010558] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:48.470 BaseBdev1 00:11:48.471 20:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.471 20:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:48.471 20:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:48.471 20:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:48.471 20:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:48.471 20:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:48.471 20:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:48.471 20:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:48.471 20:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.471 20:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.731 20:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.731 20:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:48.731 20:24:42 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.731 20:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.731 [ 00:11:48.731 { 00:11:48.731 "name": "BaseBdev1", 00:11:48.731 "aliases": [ 00:11:48.731 "7f2728d6-01a8-448a-b94e-437d4b26b156" 00:11:48.731 ], 00:11:48.731 "product_name": "Malloc disk", 00:11:48.731 "block_size": 512, 00:11:48.731 "num_blocks": 65536, 00:11:48.731 "uuid": "7f2728d6-01a8-448a-b94e-437d4b26b156", 00:11:48.731 "assigned_rate_limits": { 00:11:48.731 "rw_ios_per_sec": 0, 00:11:48.731 "rw_mbytes_per_sec": 0, 00:11:48.731 "r_mbytes_per_sec": 0, 00:11:48.731 "w_mbytes_per_sec": 0 00:11:48.731 }, 00:11:48.731 "claimed": true, 00:11:48.731 "claim_type": "exclusive_write", 00:11:48.731 "zoned": false, 00:11:48.731 "supported_io_types": { 00:11:48.731 "read": true, 00:11:48.731 "write": true, 00:11:48.731 "unmap": true, 00:11:48.731 "flush": true, 00:11:48.731 "reset": true, 00:11:48.731 "nvme_admin": false, 00:11:48.731 "nvme_io": false, 00:11:48.731 "nvme_io_md": false, 00:11:48.731 "write_zeroes": true, 00:11:48.731 "zcopy": true, 00:11:48.731 "get_zone_info": false, 00:11:48.731 "zone_management": false, 00:11:48.731 "zone_append": false, 00:11:48.731 "compare": false, 00:11:48.731 "compare_and_write": false, 00:11:48.731 "abort": true, 00:11:48.731 "seek_hole": false, 00:11:48.731 "seek_data": false, 00:11:48.731 "copy": true, 00:11:48.731 "nvme_iov_md": false 00:11:48.731 }, 00:11:48.731 "memory_domains": [ 00:11:48.731 { 00:11:48.731 "dma_device_id": "system", 00:11:48.731 "dma_device_type": 1 00:11:48.731 }, 00:11:48.731 { 00:11:48.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.731 "dma_device_type": 2 00:11:48.731 } 00:11:48.731 ], 00:11:48.731 "driver_specific": {} 00:11:48.731 } 00:11:48.731 ] 00:11:48.731 20:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.731 20:24:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@907 -- # return 0 00:11:48.731 20:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:48.731 20:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:48.731 20:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:48.731 20:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.731 20:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.731 20:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:48.731 20:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.731 20:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.731 20:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.731 20:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.731 20:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.731 20:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.731 20:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.731 20:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.731 20:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.731 20:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.731 "name": "Existed_Raid", 00:11:48.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.731 
"strip_size_kb": 0, 00:11:48.731 "state": "configuring", 00:11:48.731 "raid_level": "raid1", 00:11:48.731 "superblock": false, 00:11:48.731 "num_base_bdevs": 4, 00:11:48.731 "num_base_bdevs_discovered": 3, 00:11:48.731 "num_base_bdevs_operational": 4, 00:11:48.731 "base_bdevs_list": [ 00:11:48.731 { 00:11:48.731 "name": "BaseBdev1", 00:11:48.731 "uuid": "7f2728d6-01a8-448a-b94e-437d4b26b156", 00:11:48.731 "is_configured": true, 00:11:48.731 "data_offset": 0, 00:11:48.731 "data_size": 65536 00:11:48.731 }, 00:11:48.731 { 00:11:48.731 "name": null, 00:11:48.731 "uuid": "a4a07ffd-cd53-4f80-b48a-9c550a7893ef", 00:11:48.731 "is_configured": false, 00:11:48.731 "data_offset": 0, 00:11:48.731 "data_size": 65536 00:11:48.731 }, 00:11:48.731 { 00:11:48.731 "name": "BaseBdev3", 00:11:48.731 "uuid": "28ce5202-0c28-4018-99de-4b710bd96dd9", 00:11:48.731 "is_configured": true, 00:11:48.731 "data_offset": 0, 00:11:48.731 "data_size": 65536 00:11:48.731 }, 00:11:48.731 { 00:11:48.731 "name": "BaseBdev4", 00:11:48.731 "uuid": "c6e7a691-93e6-4fad-86ce-6e2a4fc30a4f", 00:11:48.731 "is_configured": true, 00:11:48.731 "data_offset": 0, 00:11:48.731 "data_size": 65536 00:11:48.731 } 00:11:48.731 ] 00:11:48.731 }' 00:11:48.731 20:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.731 20:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.991 20:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:48.991 20:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.991 20:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.991 20:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.991 20:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.991 
20:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:48.991 20:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:48.991 20:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.991 20:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.991 [2024-11-26 20:24:42.493915] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:48.991 20:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.991 20:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:48.991 20:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:48.991 20:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:48.991 20:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.991 20:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.991 20:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:48.991 20:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.991 20:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.991 20:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.991 20:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.991 20:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.991 20:24:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.991 20:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.991 20:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.991 20:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.250 20:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.250 "name": "Existed_Raid", 00:11:49.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.250 "strip_size_kb": 0, 00:11:49.250 "state": "configuring", 00:11:49.250 "raid_level": "raid1", 00:11:49.250 "superblock": false, 00:11:49.250 "num_base_bdevs": 4, 00:11:49.250 "num_base_bdevs_discovered": 2, 00:11:49.250 "num_base_bdevs_operational": 4, 00:11:49.250 "base_bdevs_list": [ 00:11:49.250 { 00:11:49.250 "name": "BaseBdev1", 00:11:49.250 "uuid": "7f2728d6-01a8-448a-b94e-437d4b26b156", 00:11:49.250 "is_configured": true, 00:11:49.250 "data_offset": 0, 00:11:49.250 "data_size": 65536 00:11:49.250 }, 00:11:49.250 { 00:11:49.250 "name": null, 00:11:49.250 "uuid": "a4a07ffd-cd53-4f80-b48a-9c550a7893ef", 00:11:49.250 "is_configured": false, 00:11:49.250 "data_offset": 0, 00:11:49.250 "data_size": 65536 00:11:49.250 }, 00:11:49.250 { 00:11:49.250 "name": null, 00:11:49.250 "uuid": "28ce5202-0c28-4018-99de-4b710bd96dd9", 00:11:49.250 "is_configured": false, 00:11:49.250 "data_offset": 0, 00:11:49.250 "data_size": 65536 00:11:49.250 }, 00:11:49.250 { 00:11:49.250 "name": "BaseBdev4", 00:11:49.250 "uuid": "c6e7a691-93e6-4fad-86ce-6e2a4fc30a4f", 00:11:49.250 "is_configured": true, 00:11:49.250 "data_offset": 0, 00:11:49.250 "data_size": 65536 00:11:49.250 } 00:11:49.250 ] 00:11:49.250 }' 00:11:49.250 20:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.250 20:24:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:49.509 20:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:49.509 20:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.509 20:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.509 20:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.509 20:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.509 20:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:49.509 20:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:49.509 20:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.509 20:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.509 [2024-11-26 20:24:43.025040] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:49.509 20:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.509 20:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:49.509 20:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:49.509 20:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:49.509 20:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:49.509 20:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:49.509 20:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:49.509 20:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.509 20:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.509 20:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.509 20:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.509 20:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.509 20:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:49.509 20:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.509 20:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.509 20:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.768 20:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.769 "name": "Existed_Raid", 00:11:49.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.769 "strip_size_kb": 0, 00:11:49.769 "state": "configuring", 00:11:49.769 "raid_level": "raid1", 00:11:49.769 "superblock": false, 00:11:49.769 "num_base_bdevs": 4, 00:11:49.769 "num_base_bdevs_discovered": 3, 00:11:49.769 "num_base_bdevs_operational": 4, 00:11:49.769 "base_bdevs_list": [ 00:11:49.769 { 00:11:49.769 "name": "BaseBdev1", 00:11:49.769 "uuid": "7f2728d6-01a8-448a-b94e-437d4b26b156", 00:11:49.769 "is_configured": true, 00:11:49.769 "data_offset": 0, 00:11:49.769 "data_size": 65536 00:11:49.769 }, 00:11:49.769 { 00:11:49.769 "name": null, 00:11:49.769 "uuid": "a4a07ffd-cd53-4f80-b48a-9c550a7893ef", 00:11:49.769 "is_configured": false, 00:11:49.769 "data_offset": 0, 00:11:49.769 "data_size": 65536 00:11:49.769 }, 00:11:49.769 { 
00:11:49.769 "name": "BaseBdev3", 00:11:49.769 "uuid": "28ce5202-0c28-4018-99de-4b710bd96dd9", 00:11:49.769 "is_configured": true, 00:11:49.769 "data_offset": 0, 00:11:49.769 "data_size": 65536 00:11:49.769 }, 00:11:49.769 { 00:11:49.769 "name": "BaseBdev4", 00:11:49.769 "uuid": "c6e7a691-93e6-4fad-86ce-6e2a4fc30a4f", 00:11:49.769 "is_configured": true, 00:11:49.769 "data_offset": 0, 00:11:49.769 "data_size": 65536 00:11:49.769 } 00:11:49.769 ] 00:11:49.769 }' 00:11:49.769 20:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.769 20:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.028 20:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.028 20:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.028 20:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.028 20:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:50.028 20:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.028 20:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:50.028 20:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:50.028 20:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.028 20:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.028 [2024-11-26 20:24:43.548291] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:50.028 20:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.028 20:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:50.028 20:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:50.028 20:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:50.028 20:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:50.028 20:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:50.028 20:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:50.028 20:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.028 20:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.028 20:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.028 20:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.028 20:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.028 20:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.028 20:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.288 20:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.288 20:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.288 20:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.288 "name": "Existed_Raid", 00:11:50.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.288 "strip_size_kb": 0, 00:11:50.288 "state": "configuring", 00:11:50.288 "raid_level": "raid1", 00:11:50.288 "superblock": false, 00:11:50.288 
"num_base_bdevs": 4, 00:11:50.288 "num_base_bdevs_discovered": 2, 00:11:50.288 "num_base_bdevs_operational": 4, 00:11:50.288 "base_bdevs_list": [ 00:11:50.288 { 00:11:50.288 "name": null, 00:11:50.288 "uuid": "7f2728d6-01a8-448a-b94e-437d4b26b156", 00:11:50.288 "is_configured": false, 00:11:50.288 "data_offset": 0, 00:11:50.288 "data_size": 65536 00:11:50.288 }, 00:11:50.288 { 00:11:50.288 "name": null, 00:11:50.288 "uuid": "a4a07ffd-cd53-4f80-b48a-9c550a7893ef", 00:11:50.288 "is_configured": false, 00:11:50.288 "data_offset": 0, 00:11:50.288 "data_size": 65536 00:11:50.288 }, 00:11:50.288 { 00:11:50.288 "name": "BaseBdev3", 00:11:50.288 "uuid": "28ce5202-0c28-4018-99de-4b710bd96dd9", 00:11:50.288 "is_configured": true, 00:11:50.288 "data_offset": 0, 00:11:50.288 "data_size": 65536 00:11:50.288 }, 00:11:50.288 { 00:11:50.288 "name": "BaseBdev4", 00:11:50.288 "uuid": "c6e7a691-93e6-4fad-86ce-6e2a4fc30a4f", 00:11:50.288 "is_configured": true, 00:11:50.288 "data_offset": 0, 00:11:50.288 "data_size": 65536 00:11:50.288 } 00:11:50.288 ] 00:11:50.288 }' 00:11:50.288 20:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.288 20:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.546 20:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.546 20:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.546 20:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:50.546 20:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.546 20:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.546 20:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:50.547 20:24:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:50.547 20:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.547 20:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.547 [2024-11-26 20:24:44.059863] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:50.547 20:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.547 20:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:50.547 20:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:50.547 20:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:50.547 20:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:50.547 20:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:50.547 20:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:50.547 20:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.547 20:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.547 20:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.547 20:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.547 20:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.547 20:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.547 20:24:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.547 20:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.547 20:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.805 20:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.805 "name": "Existed_Raid", 00:11:50.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.805 "strip_size_kb": 0, 00:11:50.805 "state": "configuring", 00:11:50.805 "raid_level": "raid1", 00:11:50.805 "superblock": false, 00:11:50.805 "num_base_bdevs": 4, 00:11:50.805 "num_base_bdevs_discovered": 3, 00:11:50.805 "num_base_bdevs_operational": 4, 00:11:50.805 "base_bdevs_list": [ 00:11:50.805 { 00:11:50.805 "name": null, 00:11:50.805 "uuid": "7f2728d6-01a8-448a-b94e-437d4b26b156", 00:11:50.805 "is_configured": false, 00:11:50.805 "data_offset": 0, 00:11:50.805 "data_size": 65536 00:11:50.805 }, 00:11:50.805 { 00:11:50.805 "name": "BaseBdev2", 00:11:50.805 "uuid": "a4a07ffd-cd53-4f80-b48a-9c550a7893ef", 00:11:50.805 "is_configured": true, 00:11:50.805 "data_offset": 0, 00:11:50.805 "data_size": 65536 00:11:50.805 }, 00:11:50.805 { 00:11:50.805 "name": "BaseBdev3", 00:11:50.805 "uuid": "28ce5202-0c28-4018-99de-4b710bd96dd9", 00:11:50.805 "is_configured": true, 00:11:50.805 "data_offset": 0, 00:11:50.805 "data_size": 65536 00:11:50.805 }, 00:11:50.805 { 00:11:50.805 "name": "BaseBdev4", 00:11:50.805 "uuid": "c6e7a691-93e6-4fad-86ce-6e2a4fc30a4f", 00:11:50.805 "is_configured": true, 00:11:50.805 "data_offset": 0, 00:11:50.805 "data_size": 65536 00:11:50.805 } 00:11:50.805 ] 00:11:50.805 }' 00:11:50.805 20:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.805 20:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.065 20:24:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.065 20:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:51.065 20:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.065 20:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.065 20:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.065 20:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:51.065 20:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.065 20:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:51.065 20:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.065 20:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.065 20:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.065 20:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7f2728d6-01a8-448a-b94e-437d4b26b156 00:11:51.065 20:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.065 20:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.324 [2024-11-26 20:24:44.620182] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:51.324 [2024-11-26 20:24:44.620236] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:11:51.324 [2024-11-26 20:24:44.620248] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:51.324 
[2024-11-26 20:24:44.620496] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:51.324 [2024-11-26 20:24:44.620689] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:11:51.324 [2024-11-26 20:24:44.620703] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:11:51.324 [2024-11-26 20:24:44.620931] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:51.324 NewBaseBdev 00:11:51.324 20:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.324 20:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:51.324 20:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:51.324 20:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:51.324 20:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:51.324 20:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:51.324 20:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:51.324 20:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:51.324 20:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.324 20:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.324 20:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.324 20:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:51.324 20:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:51.324 20:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.324 [ 00:11:51.324 { 00:11:51.324 "name": "NewBaseBdev", 00:11:51.324 "aliases": [ 00:11:51.324 "7f2728d6-01a8-448a-b94e-437d4b26b156" 00:11:51.324 ], 00:11:51.324 "product_name": "Malloc disk", 00:11:51.324 "block_size": 512, 00:11:51.324 "num_blocks": 65536, 00:11:51.324 "uuid": "7f2728d6-01a8-448a-b94e-437d4b26b156", 00:11:51.324 "assigned_rate_limits": { 00:11:51.324 "rw_ios_per_sec": 0, 00:11:51.324 "rw_mbytes_per_sec": 0, 00:11:51.324 "r_mbytes_per_sec": 0, 00:11:51.324 "w_mbytes_per_sec": 0 00:11:51.324 }, 00:11:51.324 "claimed": true, 00:11:51.324 "claim_type": "exclusive_write", 00:11:51.324 "zoned": false, 00:11:51.324 "supported_io_types": { 00:11:51.324 "read": true, 00:11:51.324 "write": true, 00:11:51.324 "unmap": true, 00:11:51.324 "flush": true, 00:11:51.324 "reset": true, 00:11:51.324 "nvme_admin": false, 00:11:51.324 "nvme_io": false, 00:11:51.324 "nvme_io_md": false, 00:11:51.324 "write_zeroes": true, 00:11:51.324 "zcopy": true, 00:11:51.324 "get_zone_info": false, 00:11:51.324 "zone_management": false, 00:11:51.324 "zone_append": false, 00:11:51.324 "compare": false, 00:11:51.324 "compare_and_write": false, 00:11:51.324 "abort": true, 00:11:51.324 "seek_hole": false, 00:11:51.324 "seek_data": false, 00:11:51.324 "copy": true, 00:11:51.324 "nvme_iov_md": false 00:11:51.324 }, 00:11:51.324 "memory_domains": [ 00:11:51.324 { 00:11:51.324 "dma_device_id": "system", 00:11:51.324 "dma_device_type": 1 00:11:51.324 }, 00:11:51.324 { 00:11:51.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.324 "dma_device_type": 2 00:11:51.324 } 00:11:51.324 ], 00:11:51.324 "driver_specific": {} 00:11:51.324 } 00:11:51.324 ] 00:11:51.324 20:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.324 20:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 
00:11:51.324 20:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:51.324 20:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:51.324 20:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:51.324 20:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:51.324 20:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:51.324 20:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:51.324 20:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.324 20:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.324 20:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.324 20:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.324 20:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.324 20:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.324 20:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:51.324 20:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.324 20:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.324 20:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.324 "name": "Existed_Raid", 00:11:51.324 "uuid": "dfcfe20b-e522-4215-9791-c4c1822f25ec", 00:11:51.324 "strip_size_kb": 0, 00:11:51.324 "state": "online", 00:11:51.324 
"raid_level": "raid1", 00:11:51.324 "superblock": false, 00:11:51.324 "num_base_bdevs": 4, 00:11:51.324 "num_base_bdevs_discovered": 4, 00:11:51.324 "num_base_bdevs_operational": 4, 00:11:51.324 "base_bdevs_list": [ 00:11:51.324 { 00:11:51.324 "name": "NewBaseBdev", 00:11:51.324 "uuid": "7f2728d6-01a8-448a-b94e-437d4b26b156", 00:11:51.324 "is_configured": true, 00:11:51.324 "data_offset": 0, 00:11:51.324 "data_size": 65536 00:11:51.325 }, 00:11:51.325 { 00:11:51.325 "name": "BaseBdev2", 00:11:51.325 "uuid": "a4a07ffd-cd53-4f80-b48a-9c550a7893ef", 00:11:51.325 "is_configured": true, 00:11:51.325 "data_offset": 0, 00:11:51.325 "data_size": 65536 00:11:51.325 }, 00:11:51.325 { 00:11:51.325 "name": "BaseBdev3", 00:11:51.325 "uuid": "28ce5202-0c28-4018-99de-4b710bd96dd9", 00:11:51.325 "is_configured": true, 00:11:51.325 "data_offset": 0, 00:11:51.325 "data_size": 65536 00:11:51.325 }, 00:11:51.325 { 00:11:51.325 "name": "BaseBdev4", 00:11:51.325 "uuid": "c6e7a691-93e6-4fad-86ce-6e2a4fc30a4f", 00:11:51.325 "is_configured": true, 00:11:51.325 "data_offset": 0, 00:11:51.325 "data_size": 65536 00:11:51.325 } 00:11:51.325 ] 00:11:51.325 }' 00:11:51.325 20:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.325 20:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.584 20:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:51.584 20:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:51.584 20:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:51.584 20:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:51.584 20:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:51.584 20:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:11:51.584 20:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:51.584 20:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:51.584 20:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.584 20:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.584 [2024-11-26 20:24:45.107842] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:51.584 20:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.843 20:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:51.843 "name": "Existed_Raid", 00:11:51.843 "aliases": [ 00:11:51.843 "dfcfe20b-e522-4215-9791-c4c1822f25ec" 00:11:51.843 ], 00:11:51.843 "product_name": "Raid Volume", 00:11:51.843 "block_size": 512, 00:11:51.843 "num_blocks": 65536, 00:11:51.843 "uuid": "dfcfe20b-e522-4215-9791-c4c1822f25ec", 00:11:51.843 "assigned_rate_limits": { 00:11:51.843 "rw_ios_per_sec": 0, 00:11:51.843 "rw_mbytes_per_sec": 0, 00:11:51.843 "r_mbytes_per_sec": 0, 00:11:51.843 "w_mbytes_per_sec": 0 00:11:51.843 }, 00:11:51.843 "claimed": false, 00:11:51.843 "zoned": false, 00:11:51.843 "supported_io_types": { 00:11:51.843 "read": true, 00:11:51.843 "write": true, 00:11:51.843 "unmap": false, 00:11:51.843 "flush": false, 00:11:51.843 "reset": true, 00:11:51.843 "nvme_admin": false, 00:11:51.843 "nvme_io": false, 00:11:51.843 "nvme_io_md": false, 00:11:51.843 "write_zeroes": true, 00:11:51.843 "zcopy": false, 00:11:51.843 "get_zone_info": false, 00:11:51.843 "zone_management": false, 00:11:51.843 "zone_append": false, 00:11:51.843 "compare": false, 00:11:51.843 "compare_and_write": false, 00:11:51.843 "abort": false, 00:11:51.843 "seek_hole": false, 00:11:51.843 "seek_data": false, 00:11:51.843 
"copy": false, 00:11:51.843 "nvme_iov_md": false 00:11:51.843 }, 00:11:51.843 "memory_domains": [ 00:11:51.843 { 00:11:51.843 "dma_device_id": "system", 00:11:51.843 "dma_device_type": 1 00:11:51.843 }, 00:11:51.843 { 00:11:51.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.843 "dma_device_type": 2 00:11:51.843 }, 00:11:51.843 { 00:11:51.843 "dma_device_id": "system", 00:11:51.843 "dma_device_type": 1 00:11:51.843 }, 00:11:51.843 { 00:11:51.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.843 "dma_device_type": 2 00:11:51.843 }, 00:11:51.843 { 00:11:51.843 "dma_device_id": "system", 00:11:51.843 "dma_device_type": 1 00:11:51.843 }, 00:11:51.843 { 00:11:51.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.843 "dma_device_type": 2 00:11:51.843 }, 00:11:51.843 { 00:11:51.843 "dma_device_id": "system", 00:11:51.843 "dma_device_type": 1 00:11:51.843 }, 00:11:51.843 { 00:11:51.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.843 "dma_device_type": 2 00:11:51.843 } 00:11:51.843 ], 00:11:51.843 "driver_specific": { 00:11:51.843 "raid": { 00:11:51.843 "uuid": "dfcfe20b-e522-4215-9791-c4c1822f25ec", 00:11:51.843 "strip_size_kb": 0, 00:11:51.843 "state": "online", 00:11:51.843 "raid_level": "raid1", 00:11:51.843 "superblock": false, 00:11:51.843 "num_base_bdevs": 4, 00:11:51.843 "num_base_bdevs_discovered": 4, 00:11:51.843 "num_base_bdevs_operational": 4, 00:11:51.843 "base_bdevs_list": [ 00:11:51.843 { 00:11:51.843 "name": "NewBaseBdev", 00:11:51.843 "uuid": "7f2728d6-01a8-448a-b94e-437d4b26b156", 00:11:51.843 "is_configured": true, 00:11:51.843 "data_offset": 0, 00:11:51.843 "data_size": 65536 00:11:51.843 }, 00:11:51.843 { 00:11:51.843 "name": "BaseBdev2", 00:11:51.843 "uuid": "a4a07ffd-cd53-4f80-b48a-9c550a7893ef", 00:11:51.843 "is_configured": true, 00:11:51.843 "data_offset": 0, 00:11:51.843 "data_size": 65536 00:11:51.843 }, 00:11:51.843 { 00:11:51.843 "name": "BaseBdev3", 00:11:51.843 "uuid": "28ce5202-0c28-4018-99de-4b710bd96dd9", 00:11:51.843 
"is_configured": true, 00:11:51.843 "data_offset": 0, 00:11:51.843 "data_size": 65536 00:11:51.843 }, 00:11:51.843 { 00:11:51.843 "name": "BaseBdev4", 00:11:51.843 "uuid": "c6e7a691-93e6-4fad-86ce-6e2a4fc30a4f", 00:11:51.843 "is_configured": true, 00:11:51.843 "data_offset": 0, 00:11:51.843 "data_size": 65536 00:11:51.843 } 00:11:51.843 ] 00:11:51.843 } 00:11:51.843 } 00:11:51.843 }' 00:11:51.843 20:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:51.844 20:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:51.844 BaseBdev2 00:11:51.844 BaseBdev3 00:11:51.844 BaseBdev4' 00:11:51.844 20:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:51.844 20:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:51.844 20:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:51.844 20:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:51.844 20:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:51.844 20:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.844 20:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.844 20:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.844 20:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:51.844 20:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:51.844 20:24:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:51.844 20:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:51.844 20:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.844 20:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.844 20:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:51.844 20:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.844 20:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:51.844 20:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:51.844 20:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:51.844 20:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:51.844 20:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.844 20:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.844 20:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:51.844 20:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.103 20:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:52.103 20:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:52.103 20:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:52.103 20:24:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:52.103 20:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:52.103 20:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.103 20:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.103 20:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.103 20:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:52.103 20:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:52.103 20:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:52.103 20:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.103 20:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.103 [2024-11-26 20:24:45.466839] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:52.103 [2024-11-26 20:24:45.466872] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:52.103 [2024-11-26 20:24:45.466982] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:52.103 [2024-11-26 20:24:45.467298] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:52.103 [2024-11-26 20:24:45.467319] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:11:52.103 20:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.103 20:24:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 84478 00:11:52.103 20:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 84478 ']' 00:11:52.103 20:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 84478 00:11:52.103 20:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:11:52.103 20:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:52.103 20:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84478 00:11:52.103 killing process with pid 84478 00:11:52.103 20:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:52.103 20:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:52.103 20:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84478' 00:11:52.103 20:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 84478 00:11:52.103 [2024-11-26 20:24:45.514212] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:52.103 20:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 84478 00:11:52.103 [2024-11-26 20:24:45.583185] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:52.673 20:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:52.673 ************************************ 00:11:52.673 END TEST raid_state_function_test 00:11:52.673 ************************************ 00:11:52.673 00:11:52.673 real 0m10.259s 00:11:52.673 user 0m17.069s 00:11:52.673 sys 0m2.412s 00:11:52.673 20:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:52.673 20:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:52.673 20:24:46 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:11:52.673 20:24:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:52.673 20:24:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:52.673 20:24:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:52.673 ************************************ 00:11:52.673 START TEST raid_state_function_test_sb 00:11:52.673 ************************************ 00:11:52.673 20:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 true 00:11:52.673 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:52.673 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:52.673 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:52.673 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:52.673 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:52.673 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:52.673 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:52.673 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:52.673 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:52.673 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:52.673 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:52.673 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:52.673 
20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:52.673 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:52.673 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:52.673 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:52.673 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:52.673 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:52.673 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:52.673 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:52.673 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:52.673 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:52.673 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:52.673 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:52.673 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:52.673 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:52.673 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:52.673 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:52.673 Process raid pid: 85133 00:11:52.673 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=85133 00:11:52.673 20:24:46 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:52.673 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85133' 00:11:52.673 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 85133 00:11:52.673 20:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 85133 ']' 00:11:52.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:52.673 20:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.673 20:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:52.673 20:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:52.673 20:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:52.673 20:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.673 [2024-11-26 20:24:46.115114] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:11:52.673 [2024-11-26 20:24:46.115269] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:52.933 [2024-11-26 20:24:46.277439] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.933 [2024-11-26 20:24:46.356274] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.933 [2024-11-26 20:24:46.434187] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:52.933 [2024-11-26 20:24:46.434223] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:53.500 20:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:53.500 20:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:11:53.500 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:53.500 20:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.500 20:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.500 [2024-11-26 20:24:46.991026] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:53.501 [2024-11-26 20:24:46.991079] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:53.501 [2024-11-26 20:24:46.991102] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:53.501 [2024-11-26 20:24:46.991115] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:53.501 [2024-11-26 20:24:46.991125] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:11:53.501 [2024-11-26 20:24:46.991139] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:53.501 [2024-11-26 20:24:46.991147] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:53.501 [2024-11-26 20:24:46.991157] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:53.501 20:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.501 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:53.501 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:53.501 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:53.501 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:53.501 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:53.501 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:53.501 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.501 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.501 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.501 20:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.501 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.501 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.501 20:24:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.501 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.501 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.501 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.501 "name": "Existed_Raid", 00:11:53.501 "uuid": "977b63f6-44f8-4acf-b487-9c89e10b4ce0", 00:11:53.501 "strip_size_kb": 0, 00:11:53.501 "state": "configuring", 00:11:53.501 "raid_level": "raid1", 00:11:53.501 "superblock": true, 00:11:53.501 "num_base_bdevs": 4, 00:11:53.501 "num_base_bdevs_discovered": 0, 00:11:53.501 "num_base_bdevs_operational": 4, 00:11:53.501 "base_bdevs_list": [ 00:11:53.501 { 00:11:53.501 "name": "BaseBdev1", 00:11:53.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.501 "is_configured": false, 00:11:53.501 "data_offset": 0, 00:11:53.501 "data_size": 0 00:11:53.501 }, 00:11:53.501 { 00:11:53.501 "name": "BaseBdev2", 00:11:53.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.501 "is_configured": false, 00:11:53.501 "data_offset": 0, 00:11:53.501 "data_size": 0 00:11:53.501 }, 00:11:53.501 { 00:11:53.501 "name": "BaseBdev3", 00:11:53.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.501 "is_configured": false, 00:11:53.501 "data_offset": 0, 00:11:53.501 "data_size": 0 00:11:53.501 }, 00:11:53.501 { 00:11:53.501 "name": "BaseBdev4", 00:11:53.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.501 "is_configured": false, 00:11:53.501 "data_offset": 0, 00:11:53.501 "data_size": 0 00:11:53.501 } 00:11:53.501 ] 00:11:53.501 }' 00:11:53.501 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.501 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.069 20:24:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:54.069 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.069 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.069 [2024-11-26 20:24:47.430176] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:54.069 [2024-11-26 20:24:47.430224] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:11:54.070 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.070 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:54.070 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.070 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.070 [2024-11-26 20:24:47.438199] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:54.070 [2024-11-26 20:24:47.438239] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:54.070 [2024-11-26 20:24:47.438248] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:54.070 [2024-11-26 20:24:47.438257] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:54.070 [2024-11-26 20:24:47.438264] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:54.070 [2024-11-26 20:24:47.438272] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:54.070 [2024-11-26 20:24:47.438279] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:11:54.070 [2024-11-26 20:24:47.438287] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:54.070 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.070 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:54.070 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.070 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.070 [2024-11-26 20:24:47.458243] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:54.070 BaseBdev1 00:11:54.070 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.070 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:54.070 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:54.070 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:54.070 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:54.070 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:54.070 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:54.070 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:54.070 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.070 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.070 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:54.070 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:54.070 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.070 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.070 [ 00:11:54.070 { 00:11:54.070 "name": "BaseBdev1", 00:11:54.070 "aliases": [ 00:11:54.070 "4dc2e761-52b7-4576-af0a-5cdf26d8fc41" 00:11:54.070 ], 00:11:54.070 "product_name": "Malloc disk", 00:11:54.070 "block_size": 512, 00:11:54.070 "num_blocks": 65536, 00:11:54.070 "uuid": "4dc2e761-52b7-4576-af0a-5cdf26d8fc41", 00:11:54.070 "assigned_rate_limits": { 00:11:54.070 "rw_ios_per_sec": 0, 00:11:54.070 "rw_mbytes_per_sec": 0, 00:11:54.070 "r_mbytes_per_sec": 0, 00:11:54.070 "w_mbytes_per_sec": 0 00:11:54.070 }, 00:11:54.070 "claimed": true, 00:11:54.070 "claim_type": "exclusive_write", 00:11:54.070 "zoned": false, 00:11:54.070 "supported_io_types": { 00:11:54.070 "read": true, 00:11:54.070 "write": true, 00:11:54.070 "unmap": true, 00:11:54.070 "flush": true, 00:11:54.070 "reset": true, 00:11:54.070 "nvme_admin": false, 00:11:54.070 "nvme_io": false, 00:11:54.070 "nvme_io_md": false, 00:11:54.070 "write_zeroes": true, 00:11:54.070 "zcopy": true, 00:11:54.070 "get_zone_info": false, 00:11:54.070 "zone_management": false, 00:11:54.070 "zone_append": false, 00:11:54.070 "compare": false, 00:11:54.070 "compare_and_write": false, 00:11:54.070 "abort": true, 00:11:54.070 "seek_hole": false, 00:11:54.070 "seek_data": false, 00:11:54.070 "copy": true, 00:11:54.070 "nvme_iov_md": false 00:11:54.070 }, 00:11:54.070 "memory_domains": [ 00:11:54.070 { 00:11:54.070 "dma_device_id": "system", 00:11:54.070 "dma_device_type": 1 00:11:54.070 }, 00:11:54.070 { 00:11:54.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.070 "dma_device_type": 2 00:11:54.070 } 00:11:54.070 ], 00:11:54.070 "driver_specific": {} 
00:11:54.070 } 00:11:54.070 ] 00:11:54.070 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.070 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:54.070 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:54.070 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:54.070 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:54.070 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:54.070 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:54.070 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:54.070 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.070 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.070 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.070 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.070 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.070 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.070 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.070 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.070 20:24:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.070 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.070 "name": "Existed_Raid", 00:11:54.070 "uuid": "096f1ed1-250c-4689-8a01-ffc149eb0d64", 00:11:54.070 "strip_size_kb": 0, 00:11:54.070 "state": "configuring", 00:11:54.070 "raid_level": "raid1", 00:11:54.070 "superblock": true, 00:11:54.070 "num_base_bdevs": 4, 00:11:54.070 "num_base_bdevs_discovered": 1, 00:11:54.070 "num_base_bdevs_operational": 4, 00:11:54.070 "base_bdevs_list": [ 00:11:54.070 { 00:11:54.070 "name": "BaseBdev1", 00:11:54.070 "uuid": "4dc2e761-52b7-4576-af0a-5cdf26d8fc41", 00:11:54.070 "is_configured": true, 00:11:54.070 "data_offset": 2048, 00:11:54.070 "data_size": 63488 00:11:54.070 }, 00:11:54.070 { 00:11:54.070 "name": "BaseBdev2", 00:11:54.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.070 "is_configured": false, 00:11:54.070 "data_offset": 0, 00:11:54.070 "data_size": 0 00:11:54.070 }, 00:11:54.070 { 00:11:54.070 "name": "BaseBdev3", 00:11:54.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.070 "is_configured": false, 00:11:54.070 "data_offset": 0, 00:11:54.070 "data_size": 0 00:11:54.070 }, 00:11:54.070 { 00:11:54.070 "name": "BaseBdev4", 00:11:54.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.070 "is_configured": false, 00:11:54.070 "data_offset": 0, 00:11:54.070 "data_size": 0 00:11:54.070 } 00:11:54.070 ] 00:11:54.070 }' 00:11:54.070 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.070 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.640 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:54.640 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.640 20:24:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:54.640 [2024-11-26 20:24:47.969435] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:54.640 [2024-11-26 20:24:47.969501] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:11:54.640 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.640 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:54.640 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.640 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.640 [2024-11-26 20:24:47.981448] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:54.640 [2024-11-26 20:24:47.983387] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:54.640 [2024-11-26 20:24:47.983429] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:54.640 [2024-11-26 20:24:47.983439] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:54.640 [2024-11-26 20:24:47.983447] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:54.640 [2024-11-26 20:24:47.983453] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:54.640 [2024-11-26 20:24:47.983462] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:54.640 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.640 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:54.640 20:24:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:54.640 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:54.640 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:54.641 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:54.641 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:54.641 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:54.641 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:54.641 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.641 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.641 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.641 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.641 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.641 20:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.641 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.641 20:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.641 20:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.641 20:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.641 "name": 
"Existed_Raid", 00:11:54.641 "uuid": "aaf71d0b-0720-4120-99fc-e4fad89b3667", 00:11:54.641 "strip_size_kb": 0, 00:11:54.641 "state": "configuring", 00:11:54.641 "raid_level": "raid1", 00:11:54.641 "superblock": true, 00:11:54.641 "num_base_bdevs": 4, 00:11:54.641 "num_base_bdevs_discovered": 1, 00:11:54.641 "num_base_bdevs_operational": 4, 00:11:54.641 "base_bdevs_list": [ 00:11:54.641 { 00:11:54.641 "name": "BaseBdev1", 00:11:54.641 "uuid": "4dc2e761-52b7-4576-af0a-5cdf26d8fc41", 00:11:54.641 "is_configured": true, 00:11:54.641 "data_offset": 2048, 00:11:54.641 "data_size": 63488 00:11:54.641 }, 00:11:54.641 { 00:11:54.641 "name": "BaseBdev2", 00:11:54.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.641 "is_configured": false, 00:11:54.641 "data_offset": 0, 00:11:54.641 "data_size": 0 00:11:54.641 }, 00:11:54.641 { 00:11:54.641 "name": "BaseBdev3", 00:11:54.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.641 "is_configured": false, 00:11:54.641 "data_offset": 0, 00:11:54.641 "data_size": 0 00:11:54.641 }, 00:11:54.641 { 00:11:54.641 "name": "BaseBdev4", 00:11:54.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.641 "is_configured": false, 00:11:54.641 "data_offset": 0, 00:11:54.641 "data_size": 0 00:11:54.641 } 00:11:54.641 ] 00:11:54.641 }' 00:11:54.641 20:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.641 20:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.900 20:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:54.900 20:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.900 20:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.159 [2024-11-26 20:24:48.458803] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:55.159 
BaseBdev2 00:11:55.159 20:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.159 20:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:55.159 20:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:55.159 20:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:55.159 20:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:55.159 20:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:55.159 20:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:55.159 20:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:55.159 20:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.159 20:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.159 20:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.159 20:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:55.159 20:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.159 20:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.159 [ 00:11:55.159 { 00:11:55.159 "name": "BaseBdev2", 00:11:55.159 "aliases": [ 00:11:55.159 "46473c60-1b5b-4b91-95a6-6fee11d493f2" 00:11:55.159 ], 00:11:55.159 "product_name": "Malloc disk", 00:11:55.159 "block_size": 512, 00:11:55.159 "num_blocks": 65536, 00:11:55.159 "uuid": "46473c60-1b5b-4b91-95a6-6fee11d493f2", 00:11:55.159 "assigned_rate_limits": { 
00:11:55.159 "rw_ios_per_sec": 0, 00:11:55.159 "rw_mbytes_per_sec": 0, 00:11:55.159 "r_mbytes_per_sec": 0, 00:11:55.159 "w_mbytes_per_sec": 0 00:11:55.159 }, 00:11:55.159 "claimed": true, 00:11:55.159 "claim_type": "exclusive_write", 00:11:55.159 "zoned": false, 00:11:55.159 "supported_io_types": { 00:11:55.159 "read": true, 00:11:55.159 "write": true, 00:11:55.159 "unmap": true, 00:11:55.159 "flush": true, 00:11:55.159 "reset": true, 00:11:55.159 "nvme_admin": false, 00:11:55.159 "nvme_io": false, 00:11:55.159 "nvme_io_md": false, 00:11:55.159 "write_zeroes": true, 00:11:55.159 "zcopy": true, 00:11:55.159 "get_zone_info": false, 00:11:55.159 "zone_management": false, 00:11:55.159 "zone_append": false, 00:11:55.159 "compare": false, 00:11:55.159 "compare_and_write": false, 00:11:55.159 "abort": true, 00:11:55.159 "seek_hole": false, 00:11:55.159 "seek_data": false, 00:11:55.159 "copy": true, 00:11:55.159 "nvme_iov_md": false 00:11:55.159 }, 00:11:55.159 "memory_domains": [ 00:11:55.159 { 00:11:55.159 "dma_device_id": "system", 00:11:55.159 "dma_device_type": 1 00:11:55.159 }, 00:11:55.159 { 00:11:55.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.159 "dma_device_type": 2 00:11:55.159 } 00:11:55.159 ], 00:11:55.159 "driver_specific": {} 00:11:55.159 } 00:11:55.159 ] 00:11:55.159 20:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.159 20:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:55.159 20:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:55.159 20:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:55.159 20:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:55.159 20:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:11:55.159 20:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:55.159 20:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:55.159 20:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:55.159 20:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:55.159 20:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.159 20:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.159 20:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.159 20:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.159 20:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.159 20:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.159 20:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.159 20:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.159 20:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.159 20:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.159 "name": "Existed_Raid", 00:11:55.159 "uuid": "aaf71d0b-0720-4120-99fc-e4fad89b3667", 00:11:55.159 "strip_size_kb": 0, 00:11:55.160 "state": "configuring", 00:11:55.160 "raid_level": "raid1", 00:11:55.160 "superblock": true, 00:11:55.160 "num_base_bdevs": 4, 00:11:55.160 "num_base_bdevs_discovered": 2, 00:11:55.160 "num_base_bdevs_operational": 4, 00:11:55.160 
"base_bdevs_list": [ 00:11:55.160 { 00:11:55.160 "name": "BaseBdev1", 00:11:55.160 "uuid": "4dc2e761-52b7-4576-af0a-5cdf26d8fc41", 00:11:55.160 "is_configured": true, 00:11:55.160 "data_offset": 2048, 00:11:55.160 "data_size": 63488 00:11:55.160 }, 00:11:55.160 { 00:11:55.160 "name": "BaseBdev2", 00:11:55.160 "uuid": "46473c60-1b5b-4b91-95a6-6fee11d493f2", 00:11:55.160 "is_configured": true, 00:11:55.160 "data_offset": 2048, 00:11:55.160 "data_size": 63488 00:11:55.160 }, 00:11:55.160 { 00:11:55.160 "name": "BaseBdev3", 00:11:55.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.160 "is_configured": false, 00:11:55.160 "data_offset": 0, 00:11:55.160 "data_size": 0 00:11:55.160 }, 00:11:55.160 { 00:11:55.160 "name": "BaseBdev4", 00:11:55.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.160 "is_configured": false, 00:11:55.160 "data_offset": 0, 00:11:55.160 "data_size": 0 00:11:55.160 } 00:11:55.160 ] 00:11:55.160 }' 00:11:55.160 20:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.160 20:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.419 20:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:55.419 20:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.419 20:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.419 [2024-11-26 20:24:48.899131] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:55.419 BaseBdev3 00:11:55.419 20:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.419 20:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:55.419 20:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev3 00:11:55.419 20:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:55.419 20:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:55.419 20:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:55.419 20:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:55.419 20:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:55.419 20:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.419 20:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.419 20:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.419 20:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:55.419 20:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.419 20:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.419 [ 00:11:55.419 { 00:11:55.419 "name": "BaseBdev3", 00:11:55.419 "aliases": [ 00:11:55.419 "dcdbdd1b-1a18-4894-ae07-612f53996063" 00:11:55.419 ], 00:11:55.419 "product_name": "Malloc disk", 00:11:55.419 "block_size": 512, 00:11:55.419 "num_blocks": 65536, 00:11:55.419 "uuid": "dcdbdd1b-1a18-4894-ae07-612f53996063", 00:11:55.419 "assigned_rate_limits": { 00:11:55.419 "rw_ios_per_sec": 0, 00:11:55.419 "rw_mbytes_per_sec": 0, 00:11:55.419 "r_mbytes_per_sec": 0, 00:11:55.419 "w_mbytes_per_sec": 0 00:11:55.419 }, 00:11:55.419 "claimed": true, 00:11:55.419 "claim_type": "exclusive_write", 00:11:55.419 "zoned": false, 00:11:55.419 "supported_io_types": { 00:11:55.419 "read": true, 00:11:55.419 
"write": true, 00:11:55.419 "unmap": true, 00:11:55.419 "flush": true, 00:11:55.419 "reset": true, 00:11:55.419 "nvme_admin": false, 00:11:55.419 "nvme_io": false, 00:11:55.419 "nvme_io_md": false, 00:11:55.419 "write_zeroes": true, 00:11:55.419 "zcopy": true, 00:11:55.419 "get_zone_info": false, 00:11:55.419 "zone_management": false, 00:11:55.419 "zone_append": false, 00:11:55.419 "compare": false, 00:11:55.419 "compare_and_write": false, 00:11:55.419 "abort": true, 00:11:55.419 "seek_hole": false, 00:11:55.419 "seek_data": false, 00:11:55.419 "copy": true, 00:11:55.419 "nvme_iov_md": false 00:11:55.419 }, 00:11:55.419 "memory_domains": [ 00:11:55.419 { 00:11:55.419 "dma_device_id": "system", 00:11:55.419 "dma_device_type": 1 00:11:55.419 }, 00:11:55.419 { 00:11:55.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.419 "dma_device_type": 2 00:11:55.419 } 00:11:55.419 ], 00:11:55.419 "driver_specific": {} 00:11:55.419 } 00:11:55.419 ] 00:11:55.419 20:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.419 20:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:55.419 20:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:55.419 20:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:55.419 20:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:55.419 20:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:55.419 20:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:55.419 20:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:55.419 20:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:11:55.419 20:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:55.419 20:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.420 20:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.420 20:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.420 20:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.420 20:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.420 20:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.420 20:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.420 20:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.420 20:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.679 20:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.679 "name": "Existed_Raid", 00:11:55.679 "uuid": "aaf71d0b-0720-4120-99fc-e4fad89b3667", 00:11:55.679 "strip_size_kb": 0, 00:11:55.679 "state": "configuring", 00:11:55.679 "raid_level": "raid1", 00:11:55.679 "superblock": true, 00:11:55.679 "num_base_bdevs": 4, 00:11:55.679 "num_base_bdevs_discovered": 3, 00:11:55.679 "num_base_bdevs_operational": 4, 00:11:55.679 "base_bdevs_list": [ 00:11:55.679 { 00:11:55.679 "name": "BaseBdev1", 00:11:55.679 "uuid": "4dc2e761-52b7-4576-af0a-5cdf26d8fc41", 00:11:55.679 "is_configured": true, 00:11:55.679 "data_offset": 2048, 00:11:55.679 "data_size": 63488 00:11:55.679 }, 00:11:55.679 { 00:11:55.679 "name": "BaseBdev2", 00:11:55.679 "uuid": 
"46473c60-1b5b-4b91-95a6-6fee11d493f2", 00:11:55.679 "is_configured": true, 00:11:55.679 "data_offset": 2048, 00:11:55.679 "data_size": 63488 00:11:55.679 }, 00:11:55.679 { 00:11:55.679 "name": "BaseBdev3", 00:11:55.679 "uuid": "dcdbdd1b-1a18-4894-ae07-612f53996063", 00:11:55.679 "is_configured": true, 00:11:55.679 "data_offset": 2048, 00:11:55.679 "data_size": 63488 00:11:55.679 }, 00:11:55.679 { 00:11:55.679 "name": "BaseBdev4", 00:11:55.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.679 "is_configured": false, 00:11:55.679 "data_offset": 0, 00:11:55.679 "data_size": 0 00:11:55.679 } 00:11:55.679 ] 00:11:55.679 }' 00:11:55.679 20:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.679 20:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.937 20:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:55.937 20:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.937 20:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.937 [2024-11-26 20:24:49.443509] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:55.937 [2024-11-26 20:24:49.443749] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:11:55.937 [2024-11-26 20:24:49.443777] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:55.937 [2024-11-26 20:24:49.444082] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:55.937 BaseBdev4 00:11:55.937 [2024-11-26 20:24:49.444238] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:11:55.937 [2024-11-26 20:24:49.444264] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000006980 00:11:55.937 [2024-11-26 20:24:49.444425] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:55.937 20:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.937 20:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:55.937 20:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:55.937 20:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:55.937 20:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:55.937 20:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:55.937 20:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:55.937 20:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:55.937 20:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.937 20:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.937 20:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.937 20:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:55.937 20:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.937 20:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.937 [ 00:11:55.937 { 00:11:55.937 "name": "BaseBdev4", 00:11:55.937 "aliases": [ 00:11:55.937 "be194454-a62a-4da3-a8bd-ca23cd42f29a" 00:11:55.937 ], 00:11:55.937 "product_name": "Malloc disk", 00:11:55.937 "block_size": 512, 00:11:55.937 
"num_blocks": 65536, 00:11:55.937 "uuid": "be194454-a62a-4da3-a8bd-ca23cd42f29a", 00:11:55.937 "assigned_rate_limits": { 00:11:55.937 "rw_ios_per_sec": 0, 00:11:55.937 "rw_mbytes_per_sec": 0, 00:11:55.937 "r_mbytes_per_sec": 0, 00:11:55.937 "w_mbytes_per_sec": 0 00:11:55.937 }, 00:11:55.937 "claimed": true, 00:11:55.937 "claim_type": "exclusive_write", 00:11:55.937 "zoned": false, 00:11:55.937 "supported_io_types": { 00:11:55.937 "read": true, 00:11:55.937 "write": true, 00:11:55.937 "unmap": true, 00:11:55.937 "flush": true, 00:11:55.937 "reset": true, 00:11:55.937 "nvme_admin": false, 00:11:55.937 "nvme_io": false, 00:11:55.937 "nvme_io_md": false, 00:11:55.937 "write_zeroes": true, 00:11:55.937 "zcopy": true, 00:11:55.937 "get_zone_info": false, 00:11:55.937 "zone_management": false, 00:11:55.937 "zone_append": false, 00:11:55.937 "compare": false, 00:11:55.937 "compare_and_write": false, 00:11:55.937 "abort": true, 00:11:55.937 "seek_hole": false, 00:11:55.937 "seek_data": false, 00:11:55.937 "copy": true, 00:11:55.937 "nvme_iov_md": false 00:11:55.937 }, 00:11:55.937 "memory_domains": [ 00:11:55.937 { 00:11:55.937 "dma_device_id": "system", 00:11:55.937 "dma_device_type": 1 00:11:55.937 }, 00:11:55.937 { 00:11:55.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.937 "dma_device_type": 2 00:11:55.937 } 00:11:55.937 ], 00:11:55.937 "driver_specific": {} 00:11:55.938 } 00:11:55.938 ] 00:11:55.938 20:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.938 20:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:55.938 20:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:55.938 20:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:55.938 20:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:11:55.938 20:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:55.938 20:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:55.938 20:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:55.938 20:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:55.938 20:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:55.938 20:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.938 20:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.938 20:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.938 20:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.938 20:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.938 20:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.938 20:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.938 20:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.196 20:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.196 20:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.196 "name": "Existed_Raid", 00:11:56.196 "uuid": "aaf71d0b-0720-4120-99fc-e4fad89b3667", 00:11:56.196 "strip_size_kb": 0, 00:11:56.196 "state": "online", 00:11:56.196 "raid_level": "raid1", 00:11:56.196 "superblock": true, 00:11:56.196 "num_base_bdevs": 4, 
00:11:56.196 "num_base_bdevs_discovered": 4, 00:11:56.196 "num_base_bdevs_operational": 4, 00:11:56.196 "base_bdevs_list": [ 00:11:56.196 { 00:11:56.196 "name": "BaseBdev1", 00:11:56.196 "uuid": "4dc2e761-52b7-4576-af0a-5cdf26d8fc41", 00:11:56.196 "is_configured": true, 00:11:56.196 "data_offset": 2048, 00:11:56.196 "data_size": 63488 00:11:56.196 }, 00:11:56.196 { 00:11:56.196 "name": "BaseBdev2", 00:11:56.196 "uuid": "46473c60-1b5b-4b91-95a6-6fee11d493f2", 00:11:56.196 "is_configured": true, 00:11:56.196 "data_offset": 2048, 00:11:56.196 "data_size": 63488 00:11:56.196 }, 00:11:56.196 { 00:11:56.196 "name": "BaseBdev3", 00:11:56.196 "uuid": "dcdbdd1b-1a18-4894-ae07-612f53996063", 00:11:56.196 "is_configured": true, 00:11:56.196 "data_offset": 2048, 00:11:56.196 "data_size": 63488 00:11:56.196 }, 00:11:56.196 { 00:11:56.196 "name": "BaseBdev4", 00:11:56.196 "uuid": "be194454-a62a-4da3-a8bd-ca23cd42f29a", 00:11:56.196 "is_configured": true, 00:11:56.196 "data_offset": 2048, 00:11:56.196 "data_size": 63488 00:11:56.196 } 00:11:56.196 ] 00:11:56.196 }' 00:11:56.196 20:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.196 20:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.456 20:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:56.456 20:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:56.456 20:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:56.456 20:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:56.456 20:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:56.456 20:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:56.456 
20:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:56.456 20:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:56.456 20:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.456 20:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.456 [2024-11-26 20:24:49.955060] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:56.456 20:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.456 20:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:56.456 "name": "Existed_Raid", 00:11:56.456 "aliases": [ 00:11:56.456 "aaf71d0b-0720-4120-99fc-e4fad89b3667" 00:11:56.456 ], 00:11:56.456 "product_name": "Raid Volume", 00:11:56.456 "block_size": 512, 00:11:56.456 "num_blocks": 63488, 00:11:56.456 "uuid": "aaf71d0b-0720-4120-99fc-e4fad89b3667", 00:11:56.456 "assigned_rate_limits": { 00:11:56.456 "rw_ios_per_sec": 0, 00:11:56.456 "rw_mbytes_per_sec": 0, 00:11:56.456 "r_mbytes_per_sec": 0, 00:11:56.456 "w_mbytes_per_sec": 0 00:11:56.456 }, 00:11:56.456 "claimed": false, 00:11:56.456 "zoned": false, 00:11:56.456 "supported_io_types": { 00:11:56.456 "read": true, 00:11:56.456 "write": true, 00:11:56.456 "unmap": false, 00:11:56.456 "flush": false, 00:11:56.456 "reset": true, 00:11:56.456 "nvme_admin": false, 00:11:56.456 "nvme_io": false, 00:11:56.456 "nvme_io_md": false, 00:11:56.456 "write_zeroes": true, 00:11:56.456 "zcopy": false, 00:11:56.456 "get_zone_info": false, 00:11:56.456 "zone_management": false, 00:11:56.456 "zone_append": false, 00:11:56.456 "compare": false, 00:11:56.456 "compare_and_write": false, 00:11:56.456 "abort": false, 00:11:56.456 "seek_hole": false, 00:11:56.456 "seek_data": false, 00:11:56.456 "copy": false, 00:11:56.456 
"nvme_iov_md": false 00:11:56.456 }, 00:11:56.456 "memory_domains": [ 00:11:56.456 { 00:11:56.456 "dma_device_id": "system", 00:11:56.457 "dma_device_type": 1 00:11:56.457 }, 00:11:56.457 { 00:11:56.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.457 "dma_device_type": 2 00:11:56.457 }, 00:11:56.457 { 00:11:56.457 "dma_device_id": "system", 00:11:56.457 "dma_device_type": 1 00:11:56.457 }, 00:11:56.457 { 00:11:56.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.457 "dma_device_type": 2 00:11:56.457 }, 00:11:56.457 { 00:11:56.457 "dma_device_id": "system", 00:11:56.457 "dma_device_type": 1 00:11:56.457 }, 00:11:56.457 { 00:11:56.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.457 "dma_device_type": 2 00:11:56.457 }, 00:11:56.457 { 00:11:56.457 "dma_device_id": "system", 00:11:56.457 "dma_device_type": 1 00:11:56.457 }, 00:11:56.457 { 00:11:56.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.457 "dma_device_type": 2 00:11:56.457 } 00:11:56.457 ], 00:11:56.457 "driver_specific": { 00:11:56.457 "raid": { 00:11:56.457 "uuid": "aaf71d0b-0720-4120-99fc-e4fad89b3667", 00:11:56.457 "strip_size_kb": 0, 00:11:56.457 "state": "online", 00:11:56.457 "raid_level": "raid1", 00:11:56.457 "superblock": true, 00:11:56.457 "num_base_bdevs": 4, 00:11:56.457 "num_base_bdevs_discovered": 4, 00:11:56.457 "num_base_bdevs_operational": 4, 00:11:56.457 "base_bdevs_list": [ 00:11:56.457 { 00:11:56.457 "name": "BaseBdev1", 00:11:56.457 "uuid": "4dc2e761-52b7-4576-af0a-5cdf26d8fc41", 00:11:56.457 "is_configured": true, 00:11:56.457 "data_offset": 2048, 00:11:56.457 "data_size": 63488 00:11:56.457 }, 00:11:56.457 { 00:11:56.457 "name": "BaseBdev2", 00:11:56.457 "uuid": "46473c60-1b5b-4b91-95a6-6fee11d493f2", 00:11:56.457 "is_configured": true, 00:11:56.457 "data_offset": 2048, 00:11:56.457 "data_size": 63488 00:11:56.457 }, 00:11:56.457 { 00:11:56.457 "name": "BaseBdev3", 00:11:56.457 "uuid": "dcdbdd1b-1a18-4894-ae07-612f53996063", 00:11:56.457 "is_configured": true, 
00:11:56.457 "data_offset": 2048, 00:11:56.457 "data_size": 63488 00:11:56.457 }, 00:11:56.457 { 00:11:56.457 "name": "BaseBdev4", 00:11:56.457 "uuid": "be194454-a62a-4da3-a8bd-ca23cd42f29a", 00:11:56.457 "is_configured": true, 00:11:56.457 "data_offset": 2048, 00:11:56.457 "data_size": 63488 00:11:56.457 } 00:11:56.457 ] 00:11:56.457 } 00:11:56.457 } 00:11:56.457 }' 00:11:56.457 20:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:56.716 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:56.716 BaseBdev2 00:11:56.716 BaseBdev3 00:11:56.716 BaseBdev4' 00:11:56.716 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:56.716 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:56.716 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:56.716 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:56.716 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.716 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:56.716 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.716 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.716 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:56.716 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:56.716 20:24:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:56.716 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:56.716 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:56.716 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.716 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.716 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.716 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:56.716 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:56.716 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:56.716 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:56.716 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:56.716 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.716 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.716 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.716 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:56.716 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:56.716 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:56.716 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:56.716 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.716 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.716 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:56.716 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.716 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:56.716 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:56.716 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:56.716 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.717 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.717 [2024-11-26 20:24:50.210327] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:56.717 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.717 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:56.717 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:56.717 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:56.717 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:56.717 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:56.717 20:24:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:56.717 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:56.717 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:56.717 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:56.717 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:56.717 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:56.717 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.717 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.717 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.717 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.717 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.717 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.717 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.717 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.717 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.977 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.977 "name": "Existed_Raid", 00:11:56.977 "uuid": "aaf71d0b-0720-4120-99fc-e4fad89b3667", 00:11:56.977 "strip_size_kb": 0, 00:11:56.977 
"state": "online", 00:11:56.977 "raid_level": "raid1", 00:11:56.977 "superblock": true, 00:11:56.977 "num_base_bdevs": 4, 00:11:56.977 "num_base_bdevs_discovered": 3, 00:11:56.977 "num_base_bdevs_operational": 3, 00:11:56.977 "base_bdevs_list": [ 00:11:56.977 { 00:11:56.977 "name": null, 00:11:56.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.977 "is_configured": false, 00:11:56.977 "data_offset": 0, 00:11:56.977 "data_size": 63488 00:11:56.977 }, 00:11:56.977 { 00:11:56.977 "name": "BaseBdev2", 00:11:56.977 "uuid": "46473c60-1b5b-4b91-95a6-6fee11d493f2", 00:11:56.977 "is_configured": true, 00:11:56.977 "data_offset": 2048, 00:11:56.977 "data_size": 63488 00:11:56.977 }, 00:11:56.977 { 00:11:56.977 "name": "BaseBdev3", 00:11:56.977 "uuid": "dcdbdd1b-1a18-4894-ae07-612f53996063", 00:11:56.977 "is_configured": true, 00:11:56.977 "data_offset": 2048, 00:11:56.977 "data_size": 63488 00:11:56.977 }, 00:11:56.977 { 00:11:56.977 "name": "BaseBdev4", 00:11:56.977 "uuid": "be194454-a62a-4da3-a8bd-ca23cd42f29a", 00:11:56.977 "is_configured": true, 00:11:56.977 "data_offset": 2048, 00:11:56.977 "data_size": 63488 00:11:56.977 } 00:11:56.977 ] 00:11:56.977 }' 00:11:56.977 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.977 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.236 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:57.236 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:57.237 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.237 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:57.237 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.237 20:24:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.237 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.237 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:57.237 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:57.237 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:57.237 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.237 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.237 [2024-11-26 20:24:50.689346] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:57.237 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.237 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:57.237 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:57.237 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.237 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.237 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.237 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:57.237 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.237 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:57.237 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:11:57.237 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:57.237 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.237 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.237 [2024-11-26 20:24:50.766983] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:57.237 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.237 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:57.237 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:57.237 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.496 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.496 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:57.496 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.496 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.496 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:57.496 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:57.496 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:57.496 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.496 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.496 [2024-11-26 20:24:50.833327] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:57.496 [2024-11-26 20:24:50.833456] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:57.496 [2024-11-26 20:24:50.855689] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:57.496 [2024-11-26 20:24:50.855744] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:57.496 [2024-11-26 20:24:50.855762] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:11:57.496 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.496 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:57.496 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:57.496 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.496 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.496 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.496 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:57.496 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.496 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:57.496 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:57.496 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:57.496 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:57.496 20:24:50 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:57.496 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:57.496 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.496 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.496 BaseBdev2 00:11:57.496 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.496 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:57.496 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:57.496 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:57.496 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:57.496 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:57.496 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:57.496 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:57.496 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.496 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.496 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.496 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:57.496 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.496 20:24:50 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:11:57.496 [ 00:11:57.496 { 00:11:57.496 "name": "BaseBdev2", 00:11:57.496 "aliases": [ 00:11:57.496 "1d7b6dd7-225a-4a26-a120-da1b9a52bc3d" 00:11:57.496 ], 00:11:57.496 "product_name": "Malloc disk", 00:11:57.496 "block_size": 512, 00:11:57.496 "num_blocks": 65536, 00:11:57.496 "uuid": "1d7b6dd7-225a-4a26-a120-da1b9a52bc3d", 00:11:57.496 "assigned_rate_limits": { 00:11:57.496 "rw_ios_per_sec": 0, 00:11:57.496 "rw_mbytes_per_sec": 0, 00:11:57.496 "r_mbytes_per_sec": 0, 00:11:57.496 "w_mbytes_per_sec": 0 00:11:57.496 }, 00:11:57.496 "claimed": false, 00:11:57.496 "zoned": false, 00:11:57.496 "supported_io_types": { 00:11:57.496 "read": true, 00:11:57.496 "write": true, 00:11:57.496 "unmap": true, 00:11:57.496 "flush": true, 00:11:57.496 "reset": true, 00:11:57.496 "nvme_admin": false, 00:11:57.496 "nvme_io": false, 00:11:57.496 "nvme_io_md": false, 00:11:57.496 "write_zeroes": true, 00:11:57.496 "zcopy": true, 00:11:57.496 "get_zone_info": false, 00:11:57.496 "zone_management": false, 00:11:57.496 "zone_append": false, 00:11:57.496 "compare": false, 00:11:57.496 "compare_and_write": false, 00:11:57.496 "abort": true, 00:11:57.496 "seek_hole": false, 00:11:57.496 "seek_data": false, 00:11:57.496 "copy": true, 00:11:57.496 "nvme_iov_md": false 00:11:57.496 }, 00:11:57.496 "memory_domains": [ 00:11:57.496 { 00:11:57.496 "dma_device_id": "system", 00:11:57.496 "dma_device_type": 1 00:11:57.496 }, 00:11:57.496 { 00:11:57.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.496 "dma_device_type": 2 00:11:57.496 } 00:11:57.496 ], 00:11:57.496 "driver_specific": {} 00:11:57.496 } 00:11:57.496 ] 00:11:57.496 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.496 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:57.496 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:57.496 20:24:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:57.496 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:57.497 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.497 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.497 BaseBdev3 00:11:57.497 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.497 20:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:57.497 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:57.497 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:57.497 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:57.497 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:57.497 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:57.497 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:57.497 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.497 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.497 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.497 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:57.497 20:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.497 20:24:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.497 [ 00:11:57.497 { 00:11:57.497 "name": "BaseBdev3", 00:11:57.497 "aliases": [ 00:11:57.497 "096c4927-1656-46a6-8db1-2df9bcfbf567" 00:11:57.497 ], 00:11:57.497 "product_name": "Malloc disk", 00:11:57.497 "block_size": 512, 00:11:57.497 "num_blocks": 65536, 00:11:57.497 "uuid": "096c4927-1656-46a6-8db1-2df9bcfbf567", 00:11:57.497 "assigned_rate_limits": { 00:11:57.497 "rw_ios_per_sec": 0, 00:11:57.497 "rw_mbytes_per_sec": 0, 00:11:57.497 "r_mbytes_per_sec": 0, 00:11:57.497 "w_mbytes_per_sec": 0 00:11:57.497 }, 00:11:57.497 "claimed": false, 00:11:57.497 "zoned": false, 00:11:57.497 "supported_io_types": { 00:11:57.497 "read": true, 00:11:57.497 "write": true, 00:11:57.497 "unmap": true, 00:11:57.497 "flush": true, 00:11:57.497 "reset": true, 00:11:57.497 "nvme_admin": false, 00:11:57.497 "nvme_io": false, 00:11:57.497 "nvme_io_md": false, 00:11:57.497 "write_zeroes": true, 00:11:57.497 "zcopy": true, 00:11:57.497 "get_zone_info": false, 00:11:57.497 "zone_management": false, 00:11:57.497 "zone_append": false, 00:11:57.497 "compare": false, 00:11:57.497 "compare_and_write": false, 00:11:57.497 "abort": true, 00:11:57.497 "seek_hole": false, 00:11:57.497 "seek_data": false, 00:11:57.497 "copy": true, 00:11:57.497 "nvme_iov_md": false 00:11:57.497 }, 00:11:57.497 "memory_domains": [ 00:11:57.497 { 00:11:57.497 "dma_device_id": "system", 00:11:57.497 "dma_device_type": 1 00:11:57.497 }, 00:11:57.497 { 00:11:57.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.497 "dma_device_type": 2 00:11:57.497 } 00:11:57.497 ], 00:11:57.497 "driver_specific": {} 00:11:57.497 } 00:11:57.497 ] 00:11:57.497 20:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.497 20:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:57.497 20:24:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:57.497 20:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:57.497 20:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:57.497 20:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.497 20:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.497 BaseBdev4 00:11:57.497 20:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.497 20:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:57.497 20:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:57.497 20:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:57.497 20:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:57.497 20:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:57.497 20:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:57.497 20:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:57.497 20:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.497 20:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.756 20:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.756 20:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:57.756 20:24:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.756 20:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.756 [ 00:11:57.756 { 00:11:57.756 "name": "BaseBdev4", 00:11:57.756 "aliases": [ 00:11:57.756 "c8a8701f-f37c-474d-a19f-9436b99b236f" 00:11:57.756 ], 00:11:57.756 "product_name": "Malloc disk", 00:11:57.756 "block_size": 512, 00:11:57.756 "num_blocks": 65536, 00:11:57.756 "uuid": "c8a8701f-f37c-474d-a19f-9436b99b236f", 00:11:57.756 "assigned_rate_limits": { 00:11:57.756 "rw_ios_per_sec": 0, 00:11:57.756 "rw_mbytes_per_sec": 0, 00:11:57.756 "r_mbytes_per_sec": 0, 00:11:57.756 "w_mbytes_per_sec": 0 00:11:57.756 }, 00:11:57.756 "claimed": false, 00:11:57.756 "zoned": false, 00:11:57.756 "supported_io_types": { 00:11:57.756 "read": true, 00:11:57.756 "write": true, 00:11:57.756 "unmap": true, 00:11:57.756 "flush": true, 00:11:57.756 "reset": true, 00:11:57.756 "nvme_admin": false, 00:11:57.756 "nvme_io": false, 00:11:57.756 "nvme_io_md": false, 00:11:57.756 "write_zeroes": true, 00:11:57.756 "zcopy": true, 00:11:57.756 "get_zone_info": false, 00:11:57.756 "zone_management": false, 00:11:57.756 "zone_append": false, 00:11:57.756 "compare": false, 00:11:57.756 "compare_and_write": false, 00:11:57.756 "abort": true, 00:11:57.756 "seek_hole": false, 00:11:57.756 "seek_data": false, 00:11:57.756 "copy": true, 00:11:57.756 "nvme_iov_md": false 00:11:57.756 }, 00:11:57.756 "memory_domains": [ 00:11:57.756 { 00:11:57.756 "dma_device_id": "system", 00:11:57.756 "dma_device_type": 1 00:11:57.756 }, 00:11:57.756 { 00:11:57.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.756 "dma_device_type": 2 00:11:57.756 } 00:11:57.756 ], 00:11:57.756 "driver_specific": {} 00:11:57.756 } 00:11:57.757 ] 00:11:57.757 20:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.757 20:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:11:57.757 20:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:57.757 20:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:57.757 20:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:57.757 20:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.757 20:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.757 [2024-11-26 20:24:51.083992] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:57.757 [2024-11-26 20:24:51.084053] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:57.757 [2024-11-26 20:24:51.084081] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:57.757 [2024-11-26 20:24:51.086217] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:57.757 [2024-11-26 20:24:51.086274] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:57.757 20:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.757 20:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:57.757 20:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.757 20:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:57.757 20:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:57.757 20:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:57.757 20:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:57.757 20:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.757 20:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.757 20:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.757 20:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.757 20:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.757 20:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.757 20:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.757 20:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.757 20:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.757 20:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.757 "name": "Existed_Raid", 00:11:57.757 "uuid": "56603c74-f6b1-4984-85fb-07fbf30734c6", 00:11:57.757 "strip_size_kb": 0, 00:11:57.757 "state": "configuring", 00:11:57.757 "raid_level": "raid1", 00:11:57.757 "superblock": true, 00:11:57.757 "num_base_bdevs": 4, 00:11:57.757 "num_base_bdevs_discovered": 3, 00:11:57.757 "num_base_bdevs_operational": 4, 00:11:57.757 "base_bdevs_list": [ 00:11:57.757 { 00:11:57.757 "name": "BaseBdev1", 00:11:57.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.757 "is_configured": false, 00:11:57.757 "data_offset": 0, 00:11:57.757 "data_size": 0 00:11:57.757 }, 00:11:57.757 { 00:11:57.757 "name": "BaseBdev2", 00:11:57.757 "uuid": "1d7b6dd7-225a-4a26-a120-da1b9a52bc3d", 
00:11:57.757 "is_configured": true, 00:11:57.757 "data_offset": 2048, 00:11:57.757 "data_size": 63488 00:11:57.757 }, 00:11:57.757 { 00:11:57.757 "name": "BaseBdev3", 00:11:57.757 "uuid": "096c4927-1656-46a6-8db1-2df9bcfbf567", 00:11:57.757 "is_configured": true, 00:11:57.757 "data_offset": 2048, 00:11:57.757 "data_size": 63488 00:11:57.757 }, 00:11:57.757 { 00:11:57.757 "name": "BaseBdev4", 00:11:57.757 "uuid": "c8a8701f-f37c-474d-a19f-9436b99b236f", 00:11:57.757 "is_configured": true, 00:11:57.757 "data_offset": 2048, 00:11:57.757 "data_size": 63488 00:11:57.757 } 00:11:57.757 ] 00:11:57.757 }' 00:11:57.757 20:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.757 20:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.326 20:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:58.326 20:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.326 20:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.326 [2024-11-26 20:24:51.591100] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:58.326 20:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.326 20:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:58.326 20:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:58.326 20:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:58.326 20:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:58.326 20:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:58.326 20:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:58.326 20:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.326 20:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.326 20:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.326 20:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.326 20:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.326 20:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.326 20:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.326 20:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.326 20:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.326 20:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.326 "name": "Existed_Raid", 00:11:58.326 "uuid": "56603c74-f6b1-4984-85fb-07fbf30734c6", 00:11:58.326 "strip_size_kb": 0, 00:11:58.326 "state": "configuring", 00:11:58.326 "raid_level": "raid1", 00:11:58.326 "superblock": true, 00:11:58.326 "num_base_bdevs": 4, 00:11:58.326 "num_base_bdevs_discovered": 2, 00:11:58.326 "num_base_bdevs_operational": 4, 00:11:58.326 "base_bdevs_list": [ 00:11:58.326 { 00:11:58.326 "name": "BaseBdev1", 00:11:58.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.326 "is_configured": false, 00:11:58.326 "data_offset": 0, 00:11:58.326 "data_size": 0 00:11:58.326 }, 00:11:58.326 { 00:11:58.326 "name": null, 00:11:58.326 "uuid": "1d7b6dd7-225a-4a26-a120-da1b9a52bc3d", 00:11:58.326 
"is_configured": false, 00:11:58.326 "data_offset": 0, 00:11:58.326 "data_size": 63488 00:11:58.326 }, 00:11:58.326 { 00:11:58.326 "name": "BaseBdev3", 00:11:58.326 "uuid": "096c4927-1656-46a6-8db1-2df9bcfbf567", 00:11:58.326 "is_configured": true, 00:11:58.326 "data_offset": 2048, 00:11:58.326 "data_size": 63488 00:11:58.326 }, 00:11:58.326 { 00:11:58.326 "name": "BaseBdev4", 00:11:58.326 "uuid": "c8a8701f-f37c-474d-a19f-9436b99b236f", 00:11:58.326 "is_configured": true, 00:11:58.326 "data_offset": 2048, 00:11:58.326 "data_size": 63488 00:11:58.326 } 00:11:58.326 ] 00:11:58.326 }' 00:11:58.326 20:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.326 20:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.586 20:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.586 20:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.586 20:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.586 20:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:58.586 20:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.586 20:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:58.586 20:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:58.586 20:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.586 20:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.586 [2024-11-26 20:24:52.103815] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:58.586 BaseBdev1 
00:11:58.586 20:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.586 20:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:58.586 20:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:58.586 20:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:58.586 20:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:58.586 20:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:58.586 20:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:58.586 20:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:58.586 20:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.586 20:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.586 20:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.586 20:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:58.586 20:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.586 20:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.586 [ 00:11:58.586 { 00:11:58.586 "name": "BaseBdev1", 00:11:58.586 "aliases": [ 00:11:58.586 "4b339a31-a8e3-4c70-ba7a-94362a522e7d" 00:11:58.586 ], 00:11:58.586 "product_name": "Malloc disk", 00:11:58.586 "block_size": 512, 00:11:58.586 "num_blocks": 65536, 00:11:58.586 "uuid": "4b339a31-a8e3-4c70-ba7a-94362a522e7d", 00:11:58.586 "assigned_rate_limits": { 00:11:58.586 
"rw_ios_per_sec": 0, 00:11:58.586 "rw_mbytes_per_sec": 0, 00:11:58.586 "r_mbytes_per_sec": 0, 00:11:58.586 "w_mbytes_per_sec": 0 00:11:58.586 }, 00:11:58.586 "claimed": true, 00:11:58.586 "claim_type": "exclusive_write", 00:11:58.586 "zoned": false, 00:11:58.586 "supported_io_types": { 00:11:58.586 "read": true, 00:11:58.586 "write": true, 00:11:58.586 "unmap": true, 00:11:58.586 "flush": true, 00:11:58.586 "reset": true, 00:11:58.586 "nvme_admin": false, 00:11:58.586 "nvme_io": false, 00:11:58.586 "nvme_io_md": false, 00:11:58.586 "write_zeroes": true, 00:11:58.586 "zcopy": true, 00:11:58.586 "get_zone_info": false, 00:11:58.586 "zone_management": false, 00:11:58.586 "zone_append": false, 00:11:58.586 "compare": false, 00:11:58.586 "compare_and_write": false, 00:11:58.586 "abort": true, 00:11:58.586 "seek_hole": false, 00:11:58.586 "seek_data": false, 00:11:58.586 "copy": true, 00:11:58.887 "nvme_iov_md": false 00:11:58.887 }, 00:11:58.887 "memory_domains": [ 00:11:58.887 { 00:11:58.887 "dma_device_id": "system", 00:11:58.887 "dma_device_type": 1 00:11:58.887 }, 00:11:58.887 { 00:11:58.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.887 "dma_device_type": 2 00:11:58.887 } 00:11:58.887 ], 00:11:58.887 "driver_specific": {} 00:11:58.887 } 00:11:58.887 ] 00:11:58.887 20:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.887 20:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:58.887 20:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:58.887 20:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:58.887 20:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:58.887 20:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:58.887 20:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:58.887 20:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:58.887 20:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.887 20:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.887 20:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.887 20:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.887 20:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.887 20:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.887 20:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.887 20:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.887 20:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.887 20:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.887 "name": "Existed_Raid", 00:11:58.887 "uuid": "56603c74-f6b1-4984-85fb-07fbf30734c6", 00:11:58.887 "strip_size_kb": 0, 00:11:58.887 "state": "configuring", 00:11:58.887 "raid_level": "raid1", 00:11:58.887 "superblock": true, 00:11:58.887 "num_base_bdevs": 4, 00:11:58.887 "num_base_bdevs_discovered": 3, 00:11:58.887 "num_base_bdevs_operational": 4, 00:11:58.887 "base_bdevs_list": [ 00:11:58.887 { 00:11:58.887 "name": "BaseBdev1", 00:11:58.887 "uuid": "4b339a31-a8e3-4c70-ba7a-94362a522e7d", 00:11:58.887 "is_configured": true, 00:11:58.887 "data_offset": 2048, 00:11:58.887 "data_size": 63488 
00:11:58.887 }, 00:11:58.887 { 00:11:58.887 "name": null, 00:11:58.887 "uuid": "1d7b6dd7-225a-4a26-a120-da1b9a52bc3d", 00:11:58.887 "is_configured": false, 00:11:58.887 "data_offset": 0, 00:11:58.887 "data_size": 63488 00:11:58.887 }, 00:11:58.887 { 00:11:58.887 "name": "BaseBdev3", 00:11:58.887 "uuid": "096c4927-1656-46a6-8db1-2df9bcfbf567", 00:11:58.888 "is_configured": true, 00:11:58.888 "data_offset": 2048, 00:11:58.888 "data_size": 63488 00:11:58.888 }, 00:11:58.888 { 00:11:58.888 "name": "BaseBdev4", 00:11:58.888 "uuid": "c8a8701f-f37c-474d-a19f-9436b99b236f", 00:11:58.888 "is_configured": true, 00:11:58.888 "data_offset": 2048, 00:11:58.888 "data_size": 63488 00:11:58.888 } 00:11:58.888 ] 00:11:58.888 }' 00:11:58.888 20:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.888 20:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.172 20:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:59.172 20:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.172 20:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.172 20:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.172 20:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.172 20:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:59.172 20:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:59.172 20:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.172 20:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.172 
[2024-11-26 20:24:52.626999] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:59.172 20:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.172 20:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:59.172 20:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:59.172 20:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:59.172 20:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:59.172 20:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:59.172 20:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:59.172 20:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.172 20:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.172 20:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.172 20:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.172 20:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.172 20:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.172 20:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.172 20:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.172 20:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.172 20:24:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.172 "name": "Existed_Raid", 00:11:59.172 "uuid": "56603c74-f6b1-4984-85fb-07fbf30734c6", 00:11:59.172 "strip_size_kb": 0, 00:11:59.172 "state": "configuring", 00:11:59.172 "raid_level": "raid1", 00:11:59.172 "superblock": true, 00:11:59.172 "num_base_bdevs": 4, 00:11:59.172 "num_base_bdevs_discovered": 2, 00:11:59.172 "num_base_bdevs_operational": 4, 00:11:59.172 "base_bdevs_list": [ 00:11:59.172 { 00:11:59.172 "name": "BaseBdev1", 00:11:59.172 "uuid": "4b339a31-a8e3-4c70-ba7a-94362a522e7d", 00:11:59.172 "is_configured": true, 00:11:59.172 "data_offset": 2048, 00:11:59.172 "data_size": 63488 00:11:59.172 }, 00:11:59.172 { 00:11:59.172 "name": null, 00:11:59.172 "uuid": "1d7b6dd7-225a-4a26-a120-da1b9a52bc3d", 00:11:59.172 "is_configured": false, 00:11:59.172 "data_offset": 0, 00:11:59.172 "data_size": 63488 00:11:59.172 }, 00:11:59.172 { 00:11:59.172 "name": null, 00:11:59.172 "uuid": "096c4927-1656-46a6-8db1-2df9bcfbf567", 00:11:59.172 "is_configured": false, 00:11:59.172 "data_offset": 0, 00:11:59.172 "data_size": 63488 00:11:59.172 }, 00:11:59.172 { 00:11:59.172 "name": "BaseBdev4", 00:11:59.172 "uuid": "c8a8701f-f37c-474d-a19f-9436b99b236f", 00:11:59.172 "is_configured": true, 00:11:59.172 "data_offset": 2048, 00:11:59.172 "data_size": 63488 00:11:59.172 } 00:11:59.172 ] 00:11:59.172 }' 00:11:59.172 20:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.172 20:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.740 20:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.740 20:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.740 20:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.740 20:24:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:59.740 20:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.740 20:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:59.740 20:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:59.740 20:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.740 20:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.740 [2024-11-26 20:24:53.106236] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:59.740 20:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.740 20:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:59.740 20:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:59.741 20:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:59.741 20:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:59.741 20:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:59.741 20:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:59.741 20:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.741 20:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.741 20:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:59.741 20:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.741 20:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.741 20:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.741 20:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.741 20:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.741 20:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.741 20:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.741 "name": "Existed_Raid", 00:11:59.741 "uuid": "56603c74-f6b1-4984-85fb-07fbf30734c6", 00:11:59.741 "strip_size_kb": 0, 00:11:59.741 "state": "configuring", 00:11:59.741 "raid_level": "raid1", 00:11:59.741 "superblock": true, 00:11:59.741 "num_base_bdevs": 4, 00:11:59.741 "num_base_bdevs_discovered": 3, 00:11:59.741 "num_base_bdevs_operational": 4, 00:11:59.741 "base_bdevs_list": [ 00:11:59.741 { 00:11:59.741 "name": "BaseBdev1", 00:11:59.741 "uuid": "4b339a31-a8e3-4c70-ba7a-94362a522e7d", 00:11:59.741 "is_configured": true, 00:11:59.741 "data_offset": 2048, 00:11:59.741 "data_size": 63488 00:11:59.741 }, 00:11:59.741 { 00:11:59.741 "name": null, 00:11:59.741 "uuid": "1d7b6dd7-225a-4a26-a120-da1b9a52bc3d", 00:11:59.741 "is_configured": false, 00:11:59.741 "data_offset": 0, 00:11:59.741 "data_size": 63488 00:11:59.741 }, 00:11:59.741 { 00:11:59.741 "name": "BaseBdev3", 00:11:59.741 "uuid": "096c4927-1656-46a6-8db1-2df9bcfbf567", 00:11:59.741 "is_configured": true, 00:11:59.741 "data_offset": 2048, 00:11:59.741 "data_size": 63488 00:11:59.741 }, 00:11:59.741 { 00:11:59.741 "name": "BaseBdev4", 00:11:59.741 "uuid": 
"c8a8701f-f37c-474d-a19f-9436b99b236f", 00:11:59.741 "is_configured": true, 00:11:59.741 "data_offset": 2048, 00:11:59.741 "data_size": 63488 00:11:59.741 } 00:11:59.741 ] 00:11:59.741 }' 00:11:59.741 20:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.741 20:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.309 20:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.309 20:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.309 20:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.309 20:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:00.309 20:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.309 20:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:00.309 20:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:00.309 20:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.309 20:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.309 [2024-11-26 20:24:53.673328] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:00.309 20:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.309 20:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:00.309 20:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:00.309 20:24:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:00.309 20:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:00.309 20:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:00.309 20:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:00.309 20:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.309 20:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.309 20:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.309 20:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.309 20:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.309 20:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:00.309 20:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.309 20:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.309 20:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.309 20:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.309 "name": "Existed_Raid", 00:12:00.309 "uuid": "56603c74-f6b1-4984-85fb-07fbf30734c6", 00:12:00.309 "strip_size_kb": 0, 00:12:00.309 "state": "configuring", 00:12:00.309 "raid_level": "raid1", 00:12:00.309 "superblock": true, 00:12:00.309 "num_base_bdevs": 4, 00:12:00.309 "num_base_bdevs_discovered": 2, 00:12:00.309 "num_base_bdevs_operational": 4, 00:12:00.309 "base_bdevs_list": [ 00:12:00.309 { 00:12:00.309 "name": null, 00:12:00.309 
"uuid": "4b339a31-a8e3-4c70-ba7a-94362a522e7d", 00:12:00.309 "is_configured": false, 00:12:00.309 "data_offset": 0, 00:12:00.309 "data_size": 63488 00:12:00.309 }, 00:12:00.309 { 00:12:00.309 "name": null, 00:12:00.309 "uuid": "1d7b6dd7-225a-4a26-a120-da1b9a52bc3d", 00:12:00.309 "is_configured": false, 00:12:00.309 "data_offset": 0, 00:12:00.309 "data_size": 63488 00:12:00.309 }, 00:12:00.309 { 00:12:00.309 "name": "BaseBdev3", 00:12:00.309 "uuid": "096c4927-1656-46a6-8db1-2df9bcfbf567", 00:12:00.309 "is_configured": true, 00:12:00.309 "data_offset": 2048, 00:12:00.309 "data_size": 63488 00:12:00.309 }, 00:12:00.309 { 00:12:00.309 "name": "BaseBdev4", 00:12:00.309 "uuid": "c8a8701f-f37c-474d-a19f-9436b99b236f", 00:12:00.309 "is_configured": true, 00:12:00.309 "data_offset": 2048, 00:12:00.309 "data_size": 63488 00:12:00.309 } 00:12:00.309 ] 00:12:00.309 }' 00:12:00.309 20:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.309 20:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.877 20:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:00.877 20:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.877 20:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.877 20:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.877 20:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.877 20:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:00.877 20:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:00.877 20:24:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.877 20:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.877 [2024-11-26 20:24:54.207179] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:00.877 20:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.877 20:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:00.877 20:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:00.877 20:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:00.877 20:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:00.877 20:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:00.877 20:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:00.877 20:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.877 20:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.877 20:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.877 20:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.877 20:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:00.877 20:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.877 20:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.877 20:24:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.877 20:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.877 20:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.877 "name": "Existed_Raid", 00:12:00.877 "uuid": "56603c74-f6b1-4984-85fb-07fbf30734c6", 00:12:00.877 "strip_size_kb": 0, 00:12:00.877 "state": "configuring", 00:12:00.877 "raid_level": "raid1", 00:12:00.877 "superblock": true, 00:12:00.877 "num_base_bdevs": 4, 00:12:00.877 "num_base_bdevs_discovered": 3, 00:12:00.877 "num_base_bdevs_operational": 4, 00:12:00.877 "base_bdevs_list": [ 00:12:00.877 { 00:12:00.877 "name": null, 00:12:00.877 "uuid": "4b339a31-a8e3-4c70-ba7a-94362a522e7d", 00:12:00.877 "is_configured": false, 00:12:00.877 "data_offset": 0, 00:12:00.877 "data_size": 63488 00:12:00.877 }, 00:12:00.877 { 00:12:00.877 "name": "BaseBdev2", 00:12:00.877 "uuid": "1d7b6dd7-225a-4a26-a120-da1b9a52bc3d", 00:12:00.877 "is_configured": true, 00:12:00.877 "data_offset": 2048, 00:12:00.877 "data_size": 63488 00:12:00.877 }, 00:12:00.877 { 00:12:00.877 "name": "BaseBdev3", 00:12:00.877 "uuid": "096c4927-1656-46a6-8db1-2df9bcfbf567", 00:12:00.877 "is_configured": true, 00:12:00.877 "data_offset": 2048, 00:12:00.877 "data_size": 63488 00:12:00.877 }, 00:12:00.877 { 00:12:00.877 "name": "BaseBdev4", 00:12:00.877 "uuid": "c8a8701f-f37c-474d-a19f-9436b99b236f", 00:12:00.877 "is_configured": true, 00:12:00.877 "data_offset": 2048, 00:12:00.877 "data_size": 63488 00:12:00.877 } 00:12:00.877 ] 00:12:00.877 }' 00:12:00.877 20:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.877 20:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.445 20:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.445 20:24:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:01.445 20:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.445 20:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.445 20:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.445 20:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:01.445 20:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.445 20:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.445 20:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.445 20:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:01.445 20:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.445 20:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4b339a31-a8e3-4c70-ba7a-94362a522e7d 00:12:01.445 20:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.445 20:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.445 [2024-11-26 20:24:54.807760] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:01.445 [2024-11-26 20:24:54.807961] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:12:01.445 [2024-11-26 20:24:54.807978] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:01.445 [2024-11-26 20:24:54.808260] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:12:01.445 [2024-11-26 20:24:54.808415] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:12:01.445 [2024-11-26 20:24:54.808426] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:12:01.445 [2024-11-26 20:24:54.808535] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:01.445 NewBaseBdev 00:12:01.445 20:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.445 20:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:01.445 20:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:12:01.445 20:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:01.445 20:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:01.445 20:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:01.445 20:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:01.445 20:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:01.445 20:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.445 20:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.445 20:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.446 20:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:01.446 20:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.446 20:24:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.446 [ 00:12:01.446 { 00:12:01.446 "name": "NewBaseBdev", 00:12:01.446 "aliases": [ 00:12:01.446 "4b339a31-a8e3-4c70-ba7a-94362a522e7d" 00:12:01.446 ], 00:12:01.446 "product_name": "Malloc disk", 00:12:01.446 "block_size": 512, 00:12:01.446 "num_blocks": 65536, 00:12:01.446 "uuid": "4b339a31-a8e3-4c70-ba7a-94362a522e7d", 00:12:01.446 "assigned_rate_limits": { 00:12:01.446 "rw_ios_per_sec": 0, 00:12:01.446 "rw_mbytes_per_sec": 0, 00:12:01.446 "r_mbytes_per_sec": 0, 00:12:01.446 "w_mbytes_per_sec": 0 00:12:01.446 }, 00:12:01.446 "claimed": true, 00:12:01.446 "claim_type": "exclusive_write", 00:12:01.446 "zoned": false, 00:12:01.446 "supported_io_types": { 00:12:01.446 "read": true, 00:12:01.446 "write": true, 00:12:01.446 "unmap": true, 00:12:01.446 "flush": true, 00:12:01.446 "reset": true, 00:12:01.446 "nvme_admin": false, 00:12:01.446 "nvme_io": false, 00:12:01.446 "nvme_io_md": false, 00:12:01.446 "write_zeroes": true, 00:12:01.446 "zcopy": true, 00:12:01.446 "get_zone_info": false, 00:12:01.446 "zone_management": false, 00:12:01.446 "zone_append": false, 00:12:01.446 "compare": false, 00:12:01.446 "compare_and_write": false, 00:12:01.446 "abort": true, 00:12:01.446 "seek_hole": false, 00:12:01.446 "seek_data": false, 00:12:01.446 "copy": true, 00:12:01.446 "nvme_iov_md": false 00:12:01.446 }, 00:12:01.446 "memory_domains": [ 00:12:01.446 { 00:12:01.446 "dma_device_id": "system", 00:12:01.446 "dma_device_type": 1 00:12:01.446 }, 00:12:01.446 { 00:12:01.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.446 "dma_device_type": 2 00:12:01.446 } 00:12:01.446 ], 00:12:01.446 "driver_specific": {} 00:12:01.446 } 00:12:01.446 ] 00:12:01.446 20:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.446 20:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:01.446 20:24:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:01.446 20:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.446 20:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:01.446 20:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:01.446 20:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:01.446 20:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:01.446 20:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.446 20:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.446 20:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.446 20:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.446 20:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.446 20:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.446 20:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.446 20:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.446 20:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.446 20:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.446 "name": "Existed_Raid", 00:12:01.446 "uuid": "56603c74-f6b1-4984-85fb-07fbf30734c6", 00:12:01.446 "strip_size_kb": 0, 00:12:01.446 
"state": "online", 00:12:01.446 "raid_level": "raid1", 00:12:01.446 "superblock": true, 00:12:01.446 "num_base_bdevs": 4, 00:12:01.446 "num_base_bdevs_discovered": 4, 00:12:01.446 "num_base_bdevs_operational": 4, 00:12:01.446 "base_bdevs_list": [ 00:12:01.446 { 00:12:01.446 "name": "NewBaseBdev", 00:12:01.446 "uuid": "4b339a31-a8e3-4c70-ba7a-94362a522e7d", 00:12:01.446 "is_configured": true, 00:12:01.446 "data_offset": 2048, 00:12:01.446 "data_size": 63488 00:12:01.446 }, 00:12:01.446 { 00:12:01.446 "name": "BaseBdev2", 00:12:01.446 "uuid": "1d7b6dd7-225a-4a26-a120-da1b9a52bc3d", 00:12:01.446 "is_configured": true, 00:12:01.446 "data_offset": 2048, 00:12:01.446 "data_size": 63488 00:12:01.446 }, 00:12:01.446 { 00:12:01.446 "name": "BaseBdev3", 00:12:01.446 "uuid": "096c4927-1656-46a6-8db1-2df9bcfbf567", 00:12:01.446 "is_configured": true, 00:12:01.446 "data_offset": 2048, 00:12:01.446 "data_size": 63488 00:12:01.446 }, 00:12:01.446 { 00:12:01.446 "name": "BaseBdev4", 00:12:01.446 "uuid": "c8a8701f-f37c-474d-a19f-9436b99b236f", 00:12:01.446 "is_configured": true, 00:12:01.446 "data_offset": 2048, 00:12:01.446 "data_size": 63488 00:12:01.446 } 00:12:01.446 ] 00:12:01.446 }' 00:12:01.446 20:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.446 20:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.014 20:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:02.014 20:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:02.015 20:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:02.015 20:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:02.015 20:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:02.015 
20:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:02.015 20:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:02.015 20:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:02.015 20:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.015 20:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.015 [2024-11-26 20:24:55.287397] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:02.015 20:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.015 20:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:02.015 "name": "Existed_Raid", 00:12:02.015 "aliases": [ 00:12:02.015 "56603c74-f6b1-4984-85fb-07fbf30734c6" 00:12:02.015 ], 00:12:02.015 "product_name": "Raid Volume", 00:12:02.015 "block_size": 512, 00:12:02.015 "num_blocks": 63488, 00:12:02.015 "uuid": "56603c74-f6b1-4984-85fb-07fbf30734c6", 00:12:02.015 "assigned_rate_limits": { 00:12:02.015 "rw_ios_per_sec": 0, 00:12:02.015 "rw_mbytes_per_sec": 0, 00:12:02.015 "r_mbytes_per_sec": 0, 00:12:02.015 "w_mbytes_per_sec": 0 00:12:02.015 }, 00:12:02.015 "claimed": false, 00:12:02.015 "zoned": false, 00:12:02.015 "supported_io_types": { 00:12:02.015 "read": true, 00:12:02.015 "write": true, 00:12:02.015 "unmap": false, 00:12:02.015 "flush": false, 00:12:02.015 "reset": true, 00:12:02.015 "nvme_admin": false, 00:12:02.015 "nvme_io": false, 00:12:02.015 "nvme_io_md": false, 00:12:02.015 "write_zeroes": true, 00:12:02.015 "zcopy": false, 00:12:02.015 "get_zone_info": false, 00:12:02.015 "zone_management": false, 00:12:02.015 "zone_append": false, 00:12:02.015 "compare": false, 00:12:02.015 "compare_and_write": false, 00:12:02.015 
"abort": false, 00:12:02.015 "seek_hole": false, 00:12:02.015 "seek_data": false, 00:12:02.015 "copy": false, 00:12:02.015 "nvme_iov_md": false 00:12:02.015 }, 00:12:02.015 "memory_domains": [ 00:12:02.015 { 00:12:02.015 "dma_device_id": "system", 00:12:02.015 "dma_device_type": 1 00:12:02.015 }, 00:12:02.015 { 00:12:02.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.015 "dma_device_type": 2 00:12:02.015 }, 00:12:02.015 { 00:12:02.015 "dma_device_id": "system", 00:12:02.015 "dma_device_type": 1 00:12:02.015 }, 00:12:02.015 { 00:12:02.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.015 "dma_device_type": 2 00:12:02.015 }, 00:12:02.015 { 00:12:02.015 "dma_device_id": "system", 00:12:02.015 "dma_device_type": 1 00:12:02.015 }, 00:12:02.015 { 00:12:02.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.015 "dma_device_type": 2 00:12:02.015 }, 00:12:02.015 { 00:12:02.015 "dma_device_id": "system", 00:12:02.015 "dma_device_type": 1 00:12:02.015 }, 00:12:02.015 { 00:12:02.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.015 "dma_device_type": 2 00:12:02.015 } 00:12:02.015 ], 00:12:02.015 "driver_specific": { 00:12:02.015 "raid": { 00:12:02.015 "uuid": "56603c74-f6b1-4984-85fb-07fbf30734c6", 00:12:02.015 "strip_size_kb": 0, 00:12:02.015 "state": "online", 00:12:02.015 "raid_level": "raid1", 00:12:02.015 "superblock": true, 00:12:02.015 "num_base_bdevs": 4, 00:12:02.015 "num_base_bdevs_discovered": 4, 00:12:02.015 "num_base_bdevs_operational": 4, 00:12:02.015 "base_bdevs_list": [ 00:12:02.015 { 00:12:02.015 "name": "NewBaseBdev", 00:12:02.015 "uuid": "4b339a31-a8e3-4c70-ba7a-94362a522e7d", 00:12:02.015 "is_configured": true, 00:12:02.015 "data_offset": 2048, 00:12:02.015 "data_size": 63488 00:12:02.015 }, 00:12:02.015 { 00:12:02.015 "name": "BaseBdev2", 00:12:02.015 "uuid": "1d7b6dd7-225a-4a26-a120-da1b9a52bc3d", 00:12:02.015 "is_configured": true, 00:12:02.015 "data_offset": 2048, 00:12:02.015 "data_size": 63488 00:12:02.015 }, 00:12:02.015 { 
00:12:02.015 "name": "BaseBdev3", 00:12:02.015 "uuid": "096c4927-1656-46a6-8db1-2df9bcfbf567", 00:12:02.015 "is_configured": true, 00:12:02.015 "data_offset": 2048, 00:12:02.015 "data_size": 63488 00:12:02.015 }, 00:12:02.015 { 00:12:02.015 "name": "BaseBdev4", 00:12:02.015 "uuid": "c8a8701f-f37c-474d-a19f-9436b99b236f", 00:12:02.015 "is_configured": true, 00:12:02.015 "data_offset": 2048, 00:12:02.015 "data_size": 63488 00:12:02.015 } 00:12:02.015 ] 00:12:02.015 } 00:12:02.015 } 00:12:02.015 }' 00:12:02.015 20:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:02.015 20:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:02.015 BaseBdev2 00:12:02.015 BaseBdev3 00:12:02.015 BaseBdev4' 00:12:02.015 20:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:02.015 20:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:02.015 20:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:02.015 20:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:02.015 20:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:02.015 20:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.015 20:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.015 20:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.015 20:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:12:02.015 20:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:02.015 20:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:02.015 20:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:02.015 20:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:02.015 20:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.015 20:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.015 20:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.015 20:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:02.015 20:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:02.015 20:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:02.015 20:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:02.015 20:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:02.015 20:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.015 20:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.015 20:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.274 20:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:02.274 20:24:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:02.274 20:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:02.274 20:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:02.274 20:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:02.274 20:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.274 20:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.274 20:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.274 20:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:02.274 20:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:02.274 20:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:02.274 20:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.274 20:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.274 [2024-11-26 20:24:55.626466] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:02.274 [2024-11-26 20:24:55.626501] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:02.274 [2024-11-26 20:24:55.626598] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:02.274 [2024-11-26 20:24:55.626928] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:02.274 [2024-11-26 20:24:55.626959] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000006d00 name Existed_Raid, state offline 00:12:02.274 20:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.274 20:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 85133 00:12:02.274 20:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 85133 ']' 00:12:02.274 20:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 85133 00:12:02.274 20:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:12:02.274 20:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:02.275 20:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85133 00:12:02.275 20:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:02.275 20:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:02.275 killing process with pid 85133 00:12:02.275 20:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85133' 00:12:02.275 20:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 85133 00:12:02.275 [2024-11-26 20:24:55.669760] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:02.275 20:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 85133 00:12:02.275 [2024-11-26 20:24:55.738921] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:02.842 20:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:02.842 00:12:02.842 real 0m10.088s 00:12:02.842 user 0m17.144s 00:12:02.842 sys 0m2.114s 00:12:02.842 20:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:12:02.842 20:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.842 ************************************ 00:12:02.842 END TEST raid_state_function_test_sb 00:12:02.842 ************************************ 00:12:02.842 20:24:56 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:12:02.842 20:24:56 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:02.842 20:24:56 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:02.842 20:24:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:02.842 ************************************ 00:12:02.842 START TEST raid_superblock_test 00:12:02.842 ************************************ 00:12:02.842 20:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 4 00:12:02.842 20:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:12:02.842 20:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:02.842 20:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:02.842 20:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:02.842 20:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:02.842 20:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:02.842 20:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:02.842 20:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:02.842 20:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:02.842 20:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:02.842 20:24:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:02.842 20:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:02.842 20:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:02.842 20:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:12:02.842 20:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:12:02.842 20:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=85791 00:12:02.842 20:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:02.842 20:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 85791 00:12:02.842 20:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 85791 ']' 00:12:02.842 20:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:02.842 20:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:02.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:02.842 20:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:02.842 20:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:02.842 20:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.842 [2024-11-26 20:24:56.249956] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:12:02.842 [2024-11-26 20:24:56.250094] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85791 ] 00:12:03.132 [2024-11-26 20:24:56.411599] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.132 [2024-11-26 20:24:56.495178] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.132 [2024-11-26 20:24:56.574029] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:03.132 [2024-11-26 20:24:56.574075] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:03.710 20:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:03.710 20:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:12:03.710 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:03.710 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:03.710 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:03.710 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:03.710 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:03.710 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:03.710 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:03.710 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:03.710 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:03.710 
20:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.710 20:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.710 malloc1 00:12:03.710 20:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.710 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:03.710 20:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.710 20:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.711 [2024-11-26 20:24:57.194508] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:03.711 [2024-11-26 20:24:57.194603] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.711 [2024-11-26 20:24:57.194658] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:03.711 [2024-11-26 20:24:57.194676] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.711 [2024-11-26 20:24:57.197041] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.711 [2024-11-26 20:24:57.197090] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:03.711 pt1 00:12:03.711 20:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.711 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:03.711 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:03.711 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:03.711 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:03.711 20:24:57 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:03.711 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:03.711 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:03.711 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:03.711 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:03.711 20:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.711 20:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.711 malloc2 00:12:03.711 20:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.711 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:03.711 20:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.711 20:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.711 [2024-11-26 20:24:57.234519] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:03.711 [2024-11-26 20:24:57.234601] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.711 [2024-11-26 20:24:57.234640] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:03.711 [2024-11-26 20:24:57.234656] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.711 [2024-11-26 20:24:57.237487] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.711 [2024-11-26 20:24:57.237525] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:03.711 
pt2 00:12:03.711 20:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.711 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:03.711 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:03.711 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:03.711 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:03.711 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:03.711 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:03.711 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:03.711 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:03.711 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:03.711 20:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.711 20:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.971 malloc3 00:12:03.971 20:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.971 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:03.971 20:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.971 20:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.971 [2024-11-26 20:24:57.270032] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:03.971 [2024-11-26 20:24:57.270090] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.971 [2024-11-26 20:24:57.270111] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:03.971 [2024-11-26 20:24:57.270123] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.971 [2024-11-26 20:24:57.272517] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.971 [2024-11-26 20:24:57.272553] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:03.971 pt3 00:12:03.971 20:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.971 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:03.971 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:03.971 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:03.971 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:03.971 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:03.971 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:03.971 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:03.971 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:03.971 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:03.971 20:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.971 20:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.971 malloc4 00:12:03.971 20:24:57 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.971 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:03.971 20:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.971 20:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.971 [2024-11-26 20:24:57.301620] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:03.971 [2024-11-26 20:24:57.301681] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.971 [2024-11-26 20:24:57.301698] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:03.971 [2024-11-26 20:24:57.301713] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.971 [2024-11-26 20:24:57.304018] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.971 [2024-11-26 20:24:57.304053] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:03.971 pt4 00:12:03.971 20:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.971 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:03.971 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:03.971 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:03.971 20:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.971 20:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.971 [2024-11-26 20:24:57.313714] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:03.971 [2024-11-26 20:24:57.315724] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:03.971 [2024-11-26 20:24:57.315785] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:03.971 [2024-11-26 20:24:57.315827] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:03.971 [2024-11-26 20:24:57.315979] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:12:03.971 [2024-11-26 20:24:57.315998] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:03.971 [2024-11-26 20:24:57.316285] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:03.971 [2024-11-26 20:24:57.316455] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:12:03.971 [2024-11-26 20:24:57.316481] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:12:03.971 [2024-11-26 20:24:57.316673] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:03.971 20:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.971 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:03.971 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:03.971 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:03.971 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.971 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:03.971 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:03.971 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.971 
20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.971 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.971 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.971 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.971 20:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.971 20:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.971 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:03.971 20:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.971 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.971 "name": "raid_bdev1", 00:12:03.971 "uuid": "d274f6e7-555c-41b5-88a5-da435ad61efb", 00:12:03.971 "strip_size_kb": 0, 00:12:03.971 "state": "online", 00:12:03.971 "raid_level": "raid1", 00:12:03.971 "superblock": true, 00:12:03.971 "num_base_bdevs": 4, 00:12:03.971 "num_base_bdevs_discovered": 4, 00:12:03.971 "num_base_bdevs_operational": 4, 00:12:03.971 "base_bdevs_list": [ 00:12:03.972 { 00:12:03.972 "name": "pt1", 00:12:03.972 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:03.972 "is_configured": true, 00:12:03.972 "data_offset": 2048, 00:12:03.972 "data_size": 63488 00:12:03.972 }, 00:12:03.972 { 00:12:03.972 "name": "pt2", 00:12:03.972 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:03.972 "is_configured": true, 00:12:03.972 "data_offset": 2048, 00:12:03.972 "data_size": 63488 00:12:03.972 }, 00:12:03.972 { 00:12:03.972 "name": "pt3", 00:12:03.972 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:03.972 "is_configured": true, 00:12:03.972 "data_offset": 2048, 00:12:03.972 "data_size": 63488 
00:12:03.972 }, 00:12:03.972 { 00:12:03.972 "name": "pt4", 00:12:03.972 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:03.972 "is_configured": true, 00:12:03.972 "data_offset": 2048, 00:12:03.972 "data_size": 63488 00:12:03.972 } 00:12:03.972 ] 00:12:03.972 }' 00:12:03.972 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.972 20:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.540 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:04.540 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:04.540 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:04.540 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:04.540 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:04.540 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:04.540 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:04.540 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:04.540 20:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.540 20:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.540 [2024-11-26 20:24:57.797237] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:04.540 20:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.540 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:04.540 "name": "raid_bdev1", 00:12:04.540 "aliases": [ 00:12:04.540 "d274f6e7-555c-41b5-88a5-da435ad61efb" 00:12:04.540 ], 
00:12:04.540 "product_name": "Raid Volume", 00:12:04.540 "block_size": 512, 00:12:04.540 "num_blocks": 63488, 00:12:04.540 "uuid": "d274f6e7-555c-41b5-88a5-da435ad61efb", 00:12:04.540 "assigned_rate_limits": { 00:12:04.540 "rw_ios_per_sec": 0, 00:12:04.540 "rw_mbytes_per_sec": 0, 00:12:04.540 "r_mbytes_per_sec": 0, 00:12:04.540 "w_mbytes_per_sec": 0 00:12:04.540 }, 00:12:04.540 "claimed": false, 00:12:04.540 "zoned": false, 00:12:04.540 "supported_io_types": { 00:12:04.540 "read": true, 00:12:04.540 "write": true, 00:12:04.540 "unmap": false, 00:12:04.540 "flush": false, 00:12:04.540 "reset": true, 00:12:04.540 "nvme_admin": false, 00:12:04.540 "nvme_io": false, 00:12:04.540 "nvme_io_md": false, 00:12:04.540 "write_zeroes": true, 00:12:04.540 "zcopy": false, 00:12:04.540 "get_zone_info": false, 00:12:04.540 "zone_management": false, 00:12:04.540 "zone_append": false, 00:12:04.541 "compare": false, 00:12:04.541 "compare_and_write": false, 00:12:04.541 "abort": false, 00:12:04.541 "seek_hole": false, 00:12:04.541 "seek_data": false, 00:12:04.541 "copy": false, 00:12:04.541 "nvme_iov_md": false 00:12:04.541 }, 00:12:04.541 "memory_domains": [ 00:12:04.541 { 00:12:04.541 "dma_device_id": "system", 00:12:04.541 "dma_device_type": 1 00:12:04.541 }, 00:12:04.541 { 00:12:04.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.541 "dma_device_type": 2 00:12:04.541 }, 00:12:04.541 { 00:12:04.541 "dma_device_id": "system", 00:12:04.541 "dma_device_type": 1 00:12:04.541 }, 00:12:04.541 { 00:12:04.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.541 "dma_device_type": 2 00:12:04.541 }, 00:12:04.541 { 00:12:04.541 "dma_device_id": "system", 00:12:04.541 "dma_device_type": 1 00:12:04.541 }, 00:12:04.541 { 00:12:04.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.541 "dma_device_type": 2 00:12:04.541 }, 00:12:04.541 { 00:12:04.541 "dma_device_id": "system", 00:12:04.541 "dma_device_type": 1 00:12:04.541 }, 00:12:04.541 { 00:12:04.541 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:04.541 "dma_device_type": 2 00:12:04.541 } 00:12:04.541 ], 00:12:04.541 "driver_specific": { 00:12:04.541 "raid": { 00:12:04.541 "uuid": "d274f6e7-555c-41b5-88a5-da435ad61efb", 00:12:04.541 "strip_size_kb": 0, 00:12:04.541 "state": "online", 00:12:04.541 "raid_level": "raid1", 00:12:04.541 "superblock": true, 00:12:04.541 "num_base_bdevs": 4, 00:12:04.541 "num_base_bdevs_discovered": 4, 00:12:04.541 "num_base_bdevs_operational": 4, 00:12:04.541 "base_bdevs_list": [ 00:12:04.541 { 00:12:04.541 "name": "pt1", 00:12:04.541 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:04.541 "is_configured": true, 00:12:04.541 "data_offset": 2048, 00:12:04.541 "data_size": 63488 00:12:04.541 }, 00:12:04.541 { 00:12:04.541 "name": "pt2", 00:12:04.541 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:04.541 "is_configured": true, 00:12:04.541 "data_offset": 2048, 00:12:04.541 "data_size": 63488 00:12:04.541 }, 00:12:04.541 { 00:12:04.541 "name": "pt3", 00:12:04.541 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:04.541 "is_configured": true, 00:12:04.541 "data_offset": 2048, 00:12:04.541 "data_size": 63488 00:12:04.541 }, 00:12:04.541 { 00:12:04.541 "name": "pt4", 00:12:04.541 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:04.541 "is_configured": true, 00:12:04.541 "data_offset": 2048, 00:12:04.541 "data_size": 63488 00:12:04.541 } 00:12:04.541 ] 00:12:04.541 } 00:12:04.541 } 00:12:04.541 }' 00:12:04.541 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:04.541 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:04.541 pt2 00:12:04.541 pt3 00:12:04.541 pt4' 00:12:04.541 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:04.541 20:24:57 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:04.541 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:04.541 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:04.541 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:04.541 20:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.541 20:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.541 20:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.541 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:04.541 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:04.541 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:04.541 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:04.541 20:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.541 20:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.541 20:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:04.541 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.541 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:04.541 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:04.541 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:04.541 20:24:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:04.541 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.541 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.541 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:04.541 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.541 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:04.541 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:04.541 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:04.541 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:04.541 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:04.541 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.541 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.799 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.799 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:04.799 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:04.799 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:04.799 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:04.799 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:04.799 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.799 [2024-11-26 20:24:58.145167] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:04.799 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.799 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d274f6e7-555c-41b5-88a5-da435ad61efb 00:12:04.799 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d274f6e7-555c-41b5-88a5-da435ad61efb ']' 00:12:04.799 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:04.799 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.799 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.799 [2024-11-26 20:24:58.176786] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:04.799 [2024-11-26 20:24:58.176820] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:04.799 [2024-11-26 20:24:58.176908] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:04.799 [2024-11-26 20:24:58.177009] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:04.799 [2024-11-26 20:24:58.177025] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:12:04.799 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.799 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:04.799 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.799 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:12:04.799 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.799 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.799 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:04.799 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:04.799 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:04.799 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:04.799 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.799 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.799 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.799 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:04.799 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:04.799 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.799 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.799 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.799 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:04.799 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:04.799 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.799 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.799 20:24:58 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.799 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:04.799 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:04.799 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.799 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.799 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.799 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:04.799 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:04.799 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.799 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.799 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.799 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:04.799 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:04.799 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:12:04.799 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:04.799 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:04.799 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:04.799 20:24:58 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:04.799 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:04.799 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:04.799 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.799 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.799 [2024-11-26 20:24:58.324830] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:04.799 [2024-11-26 20:24:58.327024] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:04.800 [2024-11-26 20:24:58.327090] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:04.800 [2024-11-26 20:24:58.327126] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:04.800 [2024-11-26 20:24:58.327180] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:04.800 [2024-11-26 20:24:58.327247] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:04.800 [2024-11-26 20:24:58.327274] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:04.800 [2024-11-26 20:24:58.327294] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:04.800 [2024-11-26 20:24:58.327313] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:04.800 [2024-11-26 20:24:58.327324] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name 
raid_bdev1, state configuring 00:12:04.800 request: 00:12:04.800 { 00:12:04.800 "name": "raid_bdev1", 00:12:04.800 "raid_level": "raid1", 00:12:04.800 "base_bdevs": [ 00:12:04.800 "malloc1", 00:12:04.800 "malloc2", 00:12:04.800 "malloc3", 00:12:04.800 "malloc4" 00:12:04.800 ], 00:12:04.800 "superblock": false, 00:12:04.800 "method": "bdev_raid_create", 00:12:04.800 "req_id": 1 00:12:04.800 } 00:12:04.800 Got JSON-RPC error response 00:12:04.800 response: 00:12:04.800 { 00:12:04.800 "code": -17, 00:12:04.800 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:04.800 } 00:12:04.800 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:04.800 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:12:04.800 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:04.800 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:04.800 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:04.800 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.800 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.800 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.800 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:05.057 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.057 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:05.057 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:05.057 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:05.057 
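The failed `bdev_raid_create` above returns a JSON-RPC error object with `"code": -17` (the negative of errno `EEXIST`), because the malloc bdevs still carry the superblock of the deleted raid bdev. As a rough sketch of how a caller might recognize that response, the snippet below parses the exact error shape shown in this log; the helper name `is_already_exists` is ours, not an SPDK API:

```python
import json

# Error response shape copied from the log above, produced when
# bdev_raid_create is retried over base bdevs whose superblocks
# still reference a different raid bdev.
response = json.loads("""
{
  "code": -17,
  "message": "Failed to create RAID bdev raid_bdev1: File exists"
}
""")

def is_already_exists(resp: dict) -> bool:
    # -17 is -EEXIST; in this log it signals that a conflicting
    # raid bdev / superblock already exists on the base bdevs.
    return resp.get("code") == -17

print(is_already_exists(response))  # → True
```

This is why the test wraps the call in `NOT rpc_cmd ...`: the RPC is expected to fail, and `es=1` confirms the non-zero exit status.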
20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.057 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.057 [2024-11-26 20:24:58.392775] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:05.057 [2024-11-26 20:24:58.392839] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.057 [2024-11-26 20:24:58.392863] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:05.057 [2024-11-26 20:24:58.392874] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.057 [2024-11-26 20:24:58.395359] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.057 [2024-11-26 20:24:58.395395] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:05.057 [2024-11-26 20:24:58.395481] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:05.057 [2024-11-26 20:24:58.395538] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:05.057 pt1 00:12:05.057 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.057 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:05.057 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:05.057 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:05.058 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:05.058 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:05.058 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:05.058 20:24:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.058 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.058 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.058 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.058 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.058 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.058 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.058 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.058 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.058 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.058 "name": "raid_bdev1", 00:12:05.058 "uuid": "d274f6e7-555c-41b5-88a5-da435ad61efb", 00:12:05.058 "strip_size_kb": 0, 00:12:05.058 "state": "configuring", 00:12:05.058 "raid_level": "raid1", 00:12:05.058 "superblock": true, 00:12:05.058 "num_base_bdevs": 4, 00:12:05.058 "num_base_bdevs_discovered": 1, 00:12:05.058 "num_base_bdevs_operational": 4, 00:12:05.058 "base_bdevs_list": [ 00:12:05.058 { 00:12:05.058 "name": "pt1", 00:12:05.058 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:05.058 "is_configured": true, 00:12:05.058 "data_offset": 2048, 00:12:05.058 "data_size": 63488 00:12:05.058 }, 00:12:05.058 { 00:12:05.058 "name": null, 00:12:05.058 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:05.058 "is_configured": false, 00:12:05.058 "data_offset": 2048, 00:12:05.058 "data_size": 63488 00:12:05.058 }, 00:12:05.058 { 00:12:05.058 "name": null, 00:12:05.058 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:05.058 
"is_configured": false, 00:12:05.058 "data_offset": 2048, 00:12:05.058 "data_size": 63488 00:12:05.058 }, 00:12:05.058 { 00:12:05.058 "name": null, 00:12:05.058 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:05.058 "is_configured": false, 00:12:05.058 "data_offset": 2048, 00:12:05.058 "data_size": 63488 00:12:05.058 } 00:12:05.058 ] 00:12:05.058 }' 00:12:05.058 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.058 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.315 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:05.315 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:05.315 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.315 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.315 [2024-11-26 20:24:58.812783] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:05.315 [2024-11-26 20:24:58.812922] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.315 [2024-11-26 20:24:58.812952] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:05.315 [2024-11-26 20:24:58.812962] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.315 [2024-11-26 20:24:58.813406] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.315 [2024-11-26 20:24:58.813426] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:05.315 [2024-11-26 20:24:58.813508] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:05.315 [2024-11-26 20:24:58.813538] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:12:05.315 pt2 00:12:05.315 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.315 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:05.315 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.315 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.315 [2024-11-26 20:24:58.824821] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:05.315 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.315 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:05.315 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:05.315 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:05.315 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:05.315 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:05.315 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:05.315 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.315 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.315 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.315 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.315 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.315 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:12:05.315 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.315 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.315 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.574 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.574 "name": "raid_bdev1", 00:12:05.574 "uuid": "d274f6e7-555c-41b5-88a5-da435ad61efb", 00:12:05.574 "strip_size_kb": 0, 00:12:05.574 "state": "configuring", 00:12:05.574 "raid_level": "raid1", 00:12:05.574 "superblock": true, 00:12:05.574 "num_base_bdevs": 4, 00:12:05.574 "num_base_bdevs_discovered": 1, 00:12:05.574 "num_base_bdevs_operational": 4, 00:12:05.574 "base_bdevs_list": [ 00:12:05.574 { 00:12:05.574 "name": "pt1", 00:12:05.574 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:05.574 "is_configured": true, 00:12:05.574 "data_offset": 2048, 00:12:05.574 "data_size": 63488 00:12:05.574 }, 00:12:05.574 { 00:12:05.574 "name": null, 00:12:05.574 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:05.574 "is_configured": false, 00:12:05.574 "data_offset": 0, 00:12:05.574 "data_size": 63488 00:12:05.574 }, 00:12:05.574 { 00:12:05.574 "name": null, 00:12:05.574 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:05.574 "is_configured": false, 00:12:05.574 "data_offset": 2048, 00:12:05.574 "data_size": 63488 00:12:05.574 }, 00:12:05.574 { 00:12:05.574 "name": null, 00:12:05.574 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:05.574 "is_configured": false, 00:12:05.574 "data_offset": 2048, 00:12:05.574 "data_size": 63488 00:12:05.574 } 00:12:05.574 ] 00:12:05.574 }' 00:12:05.574 20:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.574 20:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.833 20:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:12:05.833 20:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:05.833 20:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:05.833 20:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.833 20:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.833 [2024-11-26 20:24:59.272807] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:05.833 [2024-11-26 20:24:59.272882] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.833 [2024-11-26 20:24:59.272902] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:05.833 [2024-11-26 20:24:59.272915] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.833 [2024-11-26 20:24:59.273344] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.833 [2024-11-26 20:24:59.273375] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:05.833 [2024-11-26 20:24:59.273456] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:05.833 [2024-11-26 20:24:59.273490] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:05.833 pt2 00:12:05.833 20:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.833 20:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:05.833 20:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:05.833 20:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:05.833 20:24:59 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.833 20:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.833 [2024-11-26 20:24:59.284746] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:05.833 [2024-11-26 20:24:59.284812] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.833 [2024-11-26 20:24:59.284833] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:05.833 [2024-11-26 20:24:59.284844] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.833 [2024-11-26 20:24:59.285248] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.833 [2024-11-26 20:24:59.285277] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:05.833 [2024-11-26 20:24:59.285347] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:05.833 [2024-11-26 20:24:59.285370] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:05.833 pt3 00:12:05.833 20:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.833 20:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:05.833 20:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:05.833 20:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:05.833 20:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.833 20:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.833 [2024-11-26 20:24:59.296732] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:05.833 [2024-11-26 
20:24:59.296784] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.833 [2024-11-26 20:24:59.296801] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:05.833 [2024-11-26 20:24:59.296810] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.833 [2024-11-26 20:24:59.297179] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.833 [2024-11-26 20:24:59.297208] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:05.833 [2024-11-26 20:24:59.297270] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:05.833 [2024-11-26 20:24:59.297292] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:05.833 [2024-11-26 20:24:59.297406] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:12:05.833 [2024-11-26 20:24:59.297424] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:05.833 [2024-11-26 20:24:59.297697] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:05.833 [2024-11-26 20:24:59.297843] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:12:05.833 [2024-11-26 20:24:59.297860] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:12:05.833 [2024-11-26 20:24:59.297984] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:05.833 pt4 00:12:05.833 20:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.833 20:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:05.833 20:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:05.833 20:24:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:05.833 20:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:05.833 20:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:05.833 20:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:05.833 20:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:05.833 20:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:05.833 20:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.833 20:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.833 20:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.833 20:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.833 20:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.833 20:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.833 20:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.833 20:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.833 20:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.833 20:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.833 "name": "raid_bdev1", 00:12:05.833 "uuid": "d274f6e7-555c-41b5-88a5-da435ad61efb", 00:12:05.833 "strip_size_kb": 0, 00:12:05.833 "state": "online", 00:12:05.833 "raid_level": "raid1", 00:12:05.833 "superblock": true, 00:12:05.833 "num_base_bdevs": 4, 00:12:05.833 
"num_base_bdevs_discovered": 4, 00:12:05.833 "num_base_bdevs_operational": 4, 00:12:05.833 "base_bdevs_list": [ 00:12:05.833 { 00:12:05.833 "name": "pt1", 00:12:05.833 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:05.833 "is_configured": true, 00:12:05.833 "data_offset": 2048, 00:12:05.833 "data_size": 63488 00:12:05.833 }, 00:12:05.833 { 00:12:05.833 "name": "pt2", 00:12:05.833 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:05.833 "is_configured": true, 00:12:05.833 "data_offset": 2048, 00:12:05.833 "data_size": 63488 00:12:05.833 }, 00:12:05.833 { 00:12:05.833 "name": "pt3", 00:12:05.833 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:05.833 "is_configured": true, 00:12:05.833 "data_offset": 2048, 00:12:05.833 "data_size": 63488 00:12:05.833 }, 00:12:05.833 { 00:12:05.833 "name": "pt4", 00:12:05.833 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:05.833 "is_configured": true, 00:12:05.833 "data_offset": 2048, 00:12:05.833 "data_size": 63488 00:12:05.833 } 00:12:05.833 ] 00:12:05.833 }' 00:12:05.833 20:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.833 20:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.399 20:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:06.399 20:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:06.399 20:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:06.399 20:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:06.399 20:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:06.399 20:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:06.399 20:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:12:06.399 20:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.399 20:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.399 20:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:06.399 [2024-11-26 20:24:59.789179] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:06.399 20:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.399 20:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:06.399 "name": "raid_bdev1", 00:12:06.399 "aliases": [ 00:12:06.399 "d274f6e7-555c-41b5-88a5-da435ad61efb" 00:12:06.399 ], 00:12:06.399 "product_name": "Raid Volume", 00:12:06.399 "block_size": 512, 00:12:06.399 "num_blocks": 63488, 00:12:06.399 "uuid": "d274f6e7-555c-41b5-88a5-da435ad61efb", 00:12:06.399 "assigned_rate_limits": { 00:12:06.399 "rw_ios_per_sec": 0, 00:12:06.399 "rw_mbytes_per_sec": 0, 00:12:06.399 "r_mbytes_per_sec": 0, 00:12:06.399 "w_mbytes_per_sec": 0 00:12:06.399 }, 00:12:06.399 "claimed": false, 00:12:06.399 "zoned": false, 00:12:06.399 "supported_io_types": { 00:12:06.399 "read": true, 00:12:06.399 "write": true, 00:12:06.399 "unmap": false, 00:12:06.399 "flush": false, 00:12:06.399 "reset": true, 00:12:06.399 "nvme_admin": false, 00:12:06.399 "nvme_io": false, 00:12:06.399 "nvme_io_md": false, 00:12:06.399 "write_zeroes": true, 00:12:06.399 "zcopy": false, 00:12:06.399 "get_zone_info": false, 00:12:06.399 "zone_management": false, 00:12:06.399 "zone_append": false, 00:12:06.399 "compare": false, 00:12:06.399 "compare_and_write": false, 00:12:06.399 "abort": false, 00:12:06.399 "seek_hole": false, 00:12:06.399 "seek_data": false, 00:12:06.399 "copy": false, 00:12:06.399 "nvme_iov_md": false 00:12:06.399 }, 00:12:06.399 "memory_domains": [ 00:12:06.399 { 00:12:06.399 "dma_device_id": "system", 00:12:06.399 
"dma_device_type": 1 00:12:06.399 }, 00:12:06.399 { 00:12:06.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.399 "dma_device_type": 2 00:12:06.399 }, 00:12:06.399 { 00:12:06.399 "dma_device_id": "system", 00:12:06.399 "dma_device_type": 1 00:12:06.399 }, 00:12:06.399 { 00:12:06.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.399 "dma_device_type": 2 00:12:06.399 }, 00:12:06.399 { 00:12:06.399 "dma_device_id": "system", 00:12:06.399 "dma_device_type": 1 00:12:06.399 }, 00:12:06.399 { 00:12:06.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.399 "dma_device_type": 2 00:12:06.399 }, 00:12:06.399 { 00:12:06.399 "dma_device_id": "system", 00:12:06.399 "dma_device_type": 1 00:12:06.399 }, 00:12:06.399 { 00:12:06.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.399 "dma_device_type": 2 00:12:06.399 } 00:12:06.399 ], 00:12:06.399 "driver_specific": { 00:12:06.399 "raid": { 00:12:06.399 "uuid": "d274f6e7-555c-41b5-88a5-da435ad61efb", 00:12:06.399 "strip_size_kb": 0, 00:12:06.399 "state": "online", 00:12:06.399 "raid_level": "raid1", 00:12:06.399 "superblock": true, 00:12:06.399 "num_base_bdevs": 4, 00:12:06.399 "num_base_bdevs_discovered": 4, 00:12:06.399 "num_base_bdevs_operational": 4, 00:12:06.399 "base_bdevs_list": [ 00:12:06.399 { 00:12:06.399 "name": "pt1", 00:12:06.399 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:06.399 "is_configured": true, 00:12:06.399 "data_offset": 2048, 00:12:06.399 "data_size": 63488 00:12:06.399 }, 00:12:06.399 { 00:12:06.399 "name": "pt2", 00:12:06.399 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:06.399 "is_configured": true, 00:12:06.399 "data_offset": 2048, 00:12:06.399 "data_size": 63488 00:12:06.399 }, 00:12:06.399 { 00:12:06.399 "name": "pt3", 00:12:06.399 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:06.399 "is_configured": true, 00:12:06.399 "data_offset": 2048, 00:12:06.399 "data_size": 63488 00:12:06.399 }, 00:12:06.399 { 00:12:06.399 "name": "pt4", 00:12:06.399 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:12:06.399 "is_configured": true, 00:12:06.399 "data_offset": 2048, 00:12:06.399 "data_size": 63488 00:12:06.399 } 00:12:06.399 ] 00:12:06.399 } 00:12:06.399 } 00:12:06.399 }' 00:12:06.399 20:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:06.399 20:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:06.399 pt2 00:12:06.399 pt3 00:12:06.399 pt4' 00:12:06.399 20:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:06.399 20:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:06.399 20:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:06.399 20:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:06.399 20:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.399 20:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.399 20:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:06.657 20:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.657 20:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:06.657 20:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:06.657 20:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:06.657 20:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:06.657 20:24:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.657 20:24:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.657 20:24:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:06.657 20:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.657 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:06.657 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:06.657 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:06.657 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:06.657 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:06.657 20:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.657 20:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.657 20:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.657 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:06.657 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:06.657 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:06.657 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:06.657 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:06.657 20:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:12:06.657 20:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.657 20:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.657 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:06.657 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:06.657 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:06.657 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:06.657 20:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.657 20:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.657 [2024-11-26 20:25:00.129130] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:06.657 20:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.657 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d274f6e7-555c-41b5-88a5-da435ad61efb '!=' d274f6e7-555c-41b5-88a5-da435ad61efb ']' 00:12:06.657 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:12:06.657 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:06.657 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:06.657 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:12:06.657 20:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.657 20:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.657 [2024-11-26 20:25:00.172832] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:06.657 20:25:00 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.657 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:06.657 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:06.657 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:06.657 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:06.657 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:06.657 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:06.657 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.657 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.657 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.657 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.657 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.657 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.657 20:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.657 20:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.916 20:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.916 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.916 "name": "raid_bdev1", 00:12:06.916 "uuid": "d274f6e7-555c-41b5-88a5-da435ad61efb", 00:12:06.916 "strip_size_kb": 0, 00:12:06.916 "state": "online", 
00:12:06.916 "raid_level": "raid1", 00:12:06.916 "superblock": true, 00:12:06.916 "num_base_bdevs": 4, 00:12:06.916 "num_base_bdevs_discovered": 3, 00:12:06.916 "num_base_bdevs_operational": 3, 00:12:06.916 "base_bdevs_list": [ 00:12:06.916 { 00:12:06.916 "name": null, 00:12:06.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.916 "is_configured": false, 00:12:06.916 "data_offset": 0, 00:12:06.916 "data_size": 63488 00:12:06.916 }, 00:12:06.916 { 00:12:06.916 "name": "pt2", 00:12:06.916 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:06.916 "is_configured": true, 00:12:06.916 "data_offset": 2048, 00:12:06.916 "data_size": 63488 00:12:06.916 }, 00:12:06.916 { 00:12:06.916 "name": "pt3", 00:12:06.916 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:06.916 "is_configured": true, 00:12:06.916 "data_offset": 2048, 00:12:06.916 "data_size": 63488 00:12:06.916 }, 00:12:06.916 { 00:12:06.916 "name": "pt4", 00:12:06.916 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:06.916 "is_configured": true, 00:12:06.916 "data_offset": 2048, 00:12:06.916 "data_size": 63488 00:12:06.916 } 00:12:06.916 ] 00:12:06.916 }' 00:12:06.916 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.916 20:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.175 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:07.175 20:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.175 20:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.175 [2024-11-26 20:25:00.632747] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:07.175 [2024-11-26 20:25:00.632782] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:07.175 [2024-11-26 20:25:00.632877] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:12:07.175 [2024-11-26 20:25:00.632959] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:07.175 [2024-11-26 20:25:00.632977] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:12:07.175 20:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.175 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:12:07.175 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.175 20:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.175 20:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.175 20:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.175 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:12:07.175 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:12:07.175 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:12:07.175 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:07.175 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:12:07.175 20:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.175 20:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.175 20:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.175 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:07.175 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:07.175 
20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:12:07.175 20:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.175 20:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.175 20:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.175 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:07.175 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:07.175 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:12:07.175 20:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.175 20:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.175 20:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.175 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:07.175 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:07.175 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:12:07.175 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:07.175 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:07.175 20:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.175 20:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.437 [2024-11-26 20:25:00.732736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:07.437 [2024-11-26 20:25:00.732808] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:07.437 [2024-11-26 20:25:00.732829] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:07.437 [2024-11-26 20:25:00.732842] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:07.437 [2024-11-26 20:25:00.735288] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:07.437 [2024-11-26 20:25:00.735332] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:07.437 [2024-11-26 20:25:00.735405] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:07.437 [2024-11-26 20:25:00.735442] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:07.437 pt2 00:12:07.437 20:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.437 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:07.437 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:07.437 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.437 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.437 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:07.437 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:07.437 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.437 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.437 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.437 20:25:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.437 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.437 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.437 20:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.437 20:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.437 20:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.437 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.437 "name": "raid_bdev1", 00:12:07.437 "uuid": "d274f6e7-555c-41b5-88a5-da435ad61efb", 00:12:07.438 "strip_size_kb": 0, 00:12:07.438 "state": "configuring", 00:12:07.438 "raid_level": "raid1", 00:12:07.438 "superblock": true, 00:12:07.438 "num_base_bdevs": 4, 00:12:07.438 "num_base_bdevs_discovered": 1, 00:12:07.438 "num_base_bdevs_operational": 3, 00:12:07.438 "base_bdevs_list": [ 00:12:07.438 { 00:12:07.438 "name": null, 00:12:07.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.438 "is_configured": false, 00:12:07.438 "data_offset": 2048, 00:12:07.438 "data_size": 63488 00:12:07.438 }, 00:12:07.438 { 00:12:07.438 "name": "pt2", 00:12:07.438 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:07.438 "is_configured": true, 00:12:07.438 "data_offset": 2048, 00:12:07.438 "data_size": 63488 00:12:07.438 }, 00:12:07.438 { 00:12:07.438 "name": null, 00:12:07.438 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:07.438 "is_configured": false, 00:12:07.438 "data_offset": 2048, 00:12:07.438 "data_size": 63488 00:12:07.438 }, 00:12:07.438 { 00:12:07.438 "name": null, 00:12:07.438 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:07.438 "is_configured": false, 00:12:07.438 "data_offset": 2048, 00:12:07.438 "data_size": 63488 00:12:07.438 } 00:12:07.438 ] 00:12:07.438 }' 
00:12:07.438 20:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.438 20:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.696 20:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:07.696 20:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:07.696 20:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:07.696 20:25:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.696 20:25:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.696 [2024-11-26 20:25:01.224814] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:07.696 [2024-11-26 20:25:01.224898] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:07.696 [2024-11-26 20:25:01.224920] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:12:07.696 [2024-11-26 20:25:01.224935] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:07.696 [2024-11-26 20:25:01.225400] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:07.696 [2024-11-26 20:25:01.225433] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:07.696 [2024-11-26 20:25:01.225518] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:07.696 [2024-11-26 20:25:01.225550] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:07.696 pt3 00:12:07.696 20:25:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.696 20:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:12:07.696 20:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:07.696 20:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.696 20:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.696 20:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:07.696 20:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:07.696 20:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.696 20:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.696 20:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.696 20:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.696 20:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.696 20:25:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.696 20:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.696 20:25:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.954 20:25:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.954 20:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.954 "name": "raid_bdev1", 00:12:07.954 "uuid": "d274f6e7-555c-41b5-88a5-da435ad61efb", 00:12:07.954 "strip_size_kb": 0, 00:12:07.954 "state": "configuring", 00:12:07.954 "raid_level": "raid1", 00:12:07.954 "superblock": true, 00:12:07.954 "num_base_bdevs": 4, 00:12:07.954 "num_base_bdevs_discovered": 2, 00:12:07.955 "num_base_bdevs_operational": 3, 00:12:07.955 
"base_bdevs_list": [ 00:12:07.955 { 00:12:07.955 "name": null, 00:12:07.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.955 "is_configured": false, 00:12:07.955 "data_offset": 2048, 00:12:07.955 "data_size": 63488 00:12:07.955 }, 00:12:07.955 { 00:12:07.955 "name": "pt2", 00:12:07.955 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:07.955 "is_configured": true, 00:12:07.955 "data_offset": 2048, 00:12:07.955 "data_size": 63488 00:12:07.955 }, 00:12:07.955 { 00:12:07.955 "name": "pt3", 00:12:07.955 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:07.955 "is_configured": true, 00:12:07.955 "data_offset": 2048, 00:12:07.955 "data_size": 63488 00:12:07.955 }, 00:12:07.955 { 00:12:07.955 "name": null, 00:12:07.955 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:07.955 "is_configured": false, 00:12:07.955 "data_offset": 2048, 00:12:07.955 "data_size": 63488 00:12:07.955 } 00:12:07.955 ] 00:12:07.955 }' 00:12:07.955 20:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.955 20:25:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.212 20:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:08.212 20:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:08.212 20:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:12:08.212 20:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:08.212 20:25:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.212 20:25:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.212 [2024-11-26 20:25:01.704774] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:08.212 [2024-11-26 20:25:01.704855] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:08.212 [2024-11-26 20:25:01.704880] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:12:08.212 [2024-11-26 20:25:01.704895] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:08.212 [2024-11-26 20:25:01.705345] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:08.212 [2024-11-26 20:25:01.705378] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:08.212 [2024-11-26 20:25:01.705468] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:08.212 [2024-11-26 20:25:01.705509] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:08.212 [2024-11-26 20:25:01.705645] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:12:08.212 [2024-11-26 20:25:01.705664] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:08.212 [2024-11-26 20:25:01.705938] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:08.212 [2024-11-26 20:25:01.706092] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:12:08.212 [2024-11-26 20:25:01.706108] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:12:08.212 [2024-11-26 20:25:01.706232] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:08.212 pt4 00:12:08.212 20:25:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.212 20:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:08.212 20:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:08.212 20:25:01 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:08.212 20:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.212 20:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.212 20:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:08.212 20:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.212 20:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.212 20:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.212 20:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.212 20:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.212 20:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.212 20:25:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.212 20:25:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.212 20:25:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.212 20:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.212 "name": "raid_bdev1", 00:12:08.213 "uuid": "d274f6e7-555c-41b5-88a5-da435ad61efb", 00:12:08.213 "strip_size_kb": 0, 00:12:08.213 "state": "online", 00:12:08.213 "raid_level": "raid1", 00:12:08.213 "superblock": true, 00:12:08.213 "num_base_bdevs": 4, 00:12:08.213 "num_base_bdevs_discovered": 3, 00:12:08.213 "num_base_bdevs_operational": 3, 00:12:08.213 "base_bdevs_list": [ 00:12:08.213 { 00:12:08.213 "name": null, 00:12:08.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.213 "is_configured": false, 00:12:08.213 
"data_offset": 2048, 00:12:08.213 "data_size": 63488 00:12:08.213 }, 00:12:08.213 { 00:12:08.213 "name": "pt2", 00:12:08.213 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:08.213 "is_configured": true, 00:12:08.213 "data_offset": 2048, 00:12:08.213 "data_size": 63488 00:12:08.213 }, 00:12:08.213 { 00:12:08.213 "name": "pt3", 00:12:08.213 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:08.213 "is_configured": true, 00:12:08.213 "data_offset": 2048, 00:12:08.213 "data_size": 63488 00:12:08.213 }, 00:12:08.213 { 00:12:08.213 "name": "pt4", 00:12:08.213 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:08.213 "is_configured": true, 00:12:08.213 "data_offset": 2048, 00:12:08.213 "data_size": 63488 00:12:08.213 } 00:12:08.213 ] 00:12:08.213 }' 00:12:08.213 20:25:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.213 20:25:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.780 20:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:08.780 20:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.780 20:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.780 [2024-11-26 20:25:02.176768] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:08.780 [2024-11-26 20:25:02.176807] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:08.780 [2024-11-26 20:25:02.176902] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:08.780 [2024-11-26 20:25:02.176997] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:08.780 [2024-11-26 20:25:02.177009] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:12:08.780 20:25:02 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.780 20:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.780 20:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:12:08.780 20:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.780 20:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.780 20:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.780 20:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:12:08.780 20:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:12:08.780 20:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:12:08.780 20:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:12:08.780 20:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:12:08.780 20:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.780 20:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.780 20:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.780 20:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:08.780 20:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.780 20:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.780 [2024-11-26 20:25:02.252838] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:08.780 [2024-11-26 20:25:02.252921] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:12:08.780 [2024-11-26 20:25:02.252951] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:12:08.780 [2024-11-26 20:25:02.252961] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:08.780 [2024-11-26 20:25:02.255497] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:08.780 [2024-11-26 20:25:02.255540] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:08.780 [2024-11-26 20:25:02.255642] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:08.780 [2024-11-26 20:25:02.255689] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:08.780 [2024-11-26 20:25:02.255815] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:12:08.780 [2024-11-26 20:25:02.255838] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:08.780 [2024-11-26 20:25:02.255869] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:12:08.780 [2024-11-26 20:25:02.255935] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:08.780 [2024-11-26 20:25:02.256044] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:08.780 pt1 00:12:08.780 20:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.780 20:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:12:08.780 20:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:08.780 20:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:08.780 20:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:12:08.780 20:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.780 20:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.780 20:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:08.780 20:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.780 20:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.780 20:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.780 20:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.780 20:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.780 20:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.780 20:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.781 20:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.781 20:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.781 20:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.781 "name": "raid_bdev1", 00:12:08.781 "uuid": "d274f6e7-555c-41b5-88a5-da435ad61efb", 00:12:08.781 "strip_size_kb": 0, 00:12:08.781 "state": "configuring", 00:12:08.781 "raid_level": "raid1", 00:12:08.781 "superblock": true, 00:12:08.781 "num_base_bdevs": 4, 00:12:08.781 "num_base_bdevs_discovered": 2, 00:12:08.781 "num_base_bdevs_operational": 3, 00:12:08.781 "base_bdevs_list": [ 00:12:08.781 { 00:12:08.781 "name": null, 00:12:08.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.781 "is_configured": false, 00:12:08.781 "data_offset": 2048, 00:12:08.781 
"data_size": 63488 00:12:08.781 }, 00:12:08.781 { 00:12:08.781 "name": "pt2", 00:12:08.781 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:08.781 "is_configured": true, 00:12:08.781 "data_offset": 2048, 00:12:08.781 "data_size": 63488 00:12:08.781 }, 00:12:08.781 { 00:12:08.781 "name": "pt3", 00:12:08.781 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:08.781 "is_configured": true, 00:12:08.781 "data_offset": 2048, 00:12:08.781 "data_size": 63488 00:12:08.781 }, 00:12:08.781 { 00:12:08.781 "name": null, 00:12:08.781 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:08.781 "is_configured": false, 00:12:08.781 "data_offset": 2048, 00:12:08.781 "data_size": 63488 00:12:08.781 } 00:12:08.781 ] 00:12:08.781 }' 00:12:08.781 20:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.781 20:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.348 20:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:12:09.348 20:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.348 20:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.348 20:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:09.348 20:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.348 20:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:12:09.348 20:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:09.348 20:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.348 20:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.348 [2024-11-26 
20:25:02.744770] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:09.348 [2024-11-26 20:25:02.744844] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:09.348 [2024-11-26 20:25:02.744867] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:12:09.348 [2024-11-26 20:25:02.744878] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:09.348 [2024-11-26 20:25:02.745341] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:09.348 [2024-11-26 20:25:02.745374] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:09.348 [2024-11-26 20:25:02.745455] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:09.348 [2024-11-26 20:25:02.745486] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:09.348 [2024-11-26 20:25:02.745592] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:12:09.348 [2024-11-26 20:25:02.745607] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:09.348 [2024-11-26 20:25:02.745883] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:09.348 [2024-11-26 20:25:02.746023] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:12:09.348 [2024-11-26 20:25:02.746036] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:12:09.348 [2024-11-26 20:25:02.746153] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:09.348 pt4 00:12:09.348 20:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.348 20:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:09.348 20:25:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:09.348 20:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:09.348 20:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.348 20:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.348 20:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:09.348 20:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.348 20:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.348 20:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.348 20:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.348 20:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.348 20:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.348 20:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.348 20:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:09.348 20:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.348 20:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.348 "name": "raid_bdev1", 00:12:09.348 "uuid": "d274f6e7-555c-41b5-88a5-da435ad61efb", 00:12:09.348 "strip_size_kb": 0, 00:12:09.348 "state": "online", 00:12:09.348 "raid_level": "raid1", 00:12:09.348 "superblock": true, 00:12:09.348 "num_base_bdevs": 4, 00:12:09.348 "num_base_bdevs_discovered": 3, 00:12:09.348 "num_base_bdevs_operational": 3, 00:12:09.348 "base_bdevs_list": [ 00:12:09.348 { 
00:12:09.348 "name": null, 00:12:09.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.348 "is_configured": false, 00:12:09.348 "data_offset": 2048, 00:12:09.348 "data_size": 63488 00:12:09.348 }, 00:12:09.348 { 00:12:09.348 "name": "pt2", 00:12:09.348 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:09.348 "is_configured": true, 00:12:09.348 "data_offset": 2048, 00:12:09.348 "data_size": 63488 00:12:09.348 }, 00:12:09.348 { 00:12:09.348 "name": "pt3", 00:12:09.348 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:09.348 "is_configured": true, 00:12:09.348 "data_offset": 2048, 00:12:09.348 "data_size": 63488 00:12:09.348 }, 00:12:09.348 { 00:12:09.348 "name": "pt4", 00:12:09.348 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:09.348 "is_configured": true, 00:12:09.348 "data_offset": 2048, 00:12:09.348 "data_size": 63488 00:12:09.348 } 00:12:09.348 ] 00:12:09.348 }' 00:12:09.348 20:25:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.348 20:25:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.915 20:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:09.915 20:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:12:09.915 20:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.915 20:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.915 20:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.915 20:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:12:09.915 20:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:09.915 20:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:12:09.915 
20:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.915 20:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.915 [2024-11-26 20:25:03.197106] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:09.915 20:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.915 20:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' d274f6e7-555c-41b5-88a5-da435ad61efb '!=' d274f6e7-555c-41b5-88a5-da435ad61efb ']' 00:12:09.915 20:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 85791 00:12:09.915 20:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 85791 ']' 00:12:09.915 20:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 85791 00:12:09.915 20:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:12:09.915 20:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:09.915 20:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85791 00:12:09.915 killing process with pid 85791 00:12:09.915 20:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:09.915 20:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:09.915 20:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85791' 00:12:09.915 20:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 85791 00:12:09.915 20:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 85791 00:12:09.915 [2024-11-26 20:25:03.261055] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:09.915 [2024-11-26 20:25:03.261157] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:09.915 [2024-11-26 20:25:03.261247] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:09.915 [2024-11-26 20:25:03.261267] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:12:09.915 [2024-11-26 20:25:03.333172] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:10.173 20:25:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:10.173 00:12:10.173 real 0m7.516s 00:12:10.173 user 0m12.529s 00:12:10.173 sys 0m1.624s 00:12:10.173 20:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:10.173 20:25:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.173 ************************************ 00:12:10.173 END TEST raid_superblock_test 00:12:10.173 ************************************ 00:12:10.432 20:25:03 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:12:10.432 20:25:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:10.432 20:25:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:10.432 20:25:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:10.432 ************************************ 00:12:10.432 START TEST raid_read_error_test 00:12:10.432 ************************************ 00:12:10.432 20:25:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 read 00:12:10.432 20:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:10.432 20:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:10.432 20:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:10.432 20:25:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:10.432 20:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:10.432 20:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:10.432 20:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:10.432 20:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:10.432 20:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:10.432 20:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:10.432 20:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:10.432 20:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:10.432 20:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:10.432 20:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:10.432 20:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:10.432 20:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:10.432 20:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:10.432 20:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:10.432 20:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:10.432 20:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:10.432 20:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:10.432 20:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:10.432 20:25:03 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:10.432 20:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:10.432 20:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:10.432 20:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:10.432 20:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:10.432 20:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.yvHHploemr 00:12:10.432 20:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=86268 00:12:10.432 20:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 86268 00:12:10.432 20:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:10.432 20:25:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 86268 ']' 00:12:10.432 20:25:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:10.432 20:25:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:10.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:10.432 20:25:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:10.432 20:25:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:10.432 20:25:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.432 [2024-11-26 20:25:03.871476] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:12:10.432 [2024-11-26 20:25:03.871611] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86268 ] 00:12:10.692 [2024-11-26 20:25:04.034058] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.692 [2024-11-26 20:25:04.112331] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.692 [2024-11-26 20:25:04.189015] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:10.692 [2024-11-26 20:25:04.189062] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:11.258 20:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:11.258 20:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:12:11.258 20:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:11.258 20:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:11.258 20:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.258 20:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.258 BaseBdev1_malloc 00:12:11.258 20:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.258 20:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:11.258 20:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.258 20:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.258 true 00:12:11.258 20:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:11.258 20:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:11.258 20:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.258 20:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.258 [2024-11-26 20:25:04.752094] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:11.258 [2024-11-26 20:25:04.752158] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:11.258 [2024-11-26 20:25:04.752185] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:11.258 [2024-11-26 20:25:04.752200] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:11.258 [2024-11-26 20:25:04.754555] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:11.258 [2024-11-26 20:25:04.754590] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:11.258 BaseBdev1 00:12:11.258 20:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.258 20:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:11.258 20:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:11.258 20:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.258 20:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.258 BaseBdev2_malloc 00:12:11.258 20:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.258 20:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:11.258 20:25:04 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.258 20:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.258 true 00:12:11.258 20:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.258 20:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:11.258 20:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.258 20:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.258 [2024-11-26 20:25:04.802571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:11.258 [2024-11-26 20:25:04.802637] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:11.258 [2024-11-26 20:25:04.802657] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:11.258 [2024-11-26 20:25:04.802667] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:11.258 [2024-11-26 20:25:04.805028] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:11.258 [2024-11-26 20:25:04.805064] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:11.258 BaseBdev2 00:12:11.258 20:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.258 20:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:11.516 20:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:11.516 20:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.516 20:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.516 BaseBdev3_malloc 00:12:11.516 20:25:04 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.516 20:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:11.516 20:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.516 20:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.516 true 00:12:11.517 20:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.517 20:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:11.517 20:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.517 20:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.517 [2024-11-26 20:25:04.849363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:11.517 [2024-11-26 20:25:04.849413] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:11.517 [2024-11-26 20:25:04.849435] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:11.517 [2024-11-26 20:25:04.849446] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:11.517 [2024-11-26 20:25:04.851841] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:11.517 [2024-11-26 20:25:04.851877] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:11.517 BaseBdev3 00:12:11.517 20:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.517 20:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:11.517 20:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:11.517 20:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.517 20:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.517 BaseBdev4_malloc 00:12:11.517 20:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.517 20:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:11.517 20:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.517 20:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.517 true 00:12:11.517 20:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.517 20:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:11.517 20:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.517 20:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.517 [2024-11-26 20:25:04.892692] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:11.517 [2024-11-26 20:25:04.892737] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:11.517 [2024-11-26 20:25:04.892757] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:11.517 [2024-11-26 20:25:04.892766] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:11.517 [2024-11-26 20:25:04.894917] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:11.517 [2024-11-26 20:25:04.894948] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:11.517 BaseBdev4 00:12:11.517 20:25:04 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.517 20:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:11.517 20:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.517 20:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.517 [2024-11-26 20:25:04.904731] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:11.517 [2024-11-26 20:25:04.906660] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:11.517 [2024-11-26 20:25:04.906745] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:11.517 [2024-11-26 20:25:04.906798] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:11.517 [2024-11-26 20:25:04.907022] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:12:11.517 [2024-11-26 20:25:04.907041] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:11.517 [2024-11-26 20:25:04.907299] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:11.517 [2024-11-26 20:25:04.907450] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:12:11.517 [2024-11-26 20:25:04.907467] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:12:11.517 [2024-11-26 20:25:04.907582] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:11.517 20:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.517 20:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:11.517 20:25:04 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:11.517 20:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:11.517 20:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:11.517 20:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:11.517 20:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:11.517 20:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.517 20:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.517 20:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.517 20:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.517 20:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.517 20:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:11.517 20:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.517 20:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.517 20:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.517 20:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.517 "name": "raid_bdev1", 00:12:11.517 "uuid": "7f319c36-6edc-4cd0-a287-34822c347926", 00:12:11.517 "strip_size_kb": 0, 00:12:11.517 "state": "online", 00:12:11.517 "raid_level": "raid1", 00:12:11.517 "superblock": true, 00:12:11.517 "num_base_bdevs": 4, 00:12:11.517 "num_base_bdevs_discovered": 4, 00:12:11.517 "num_base_bdevs_operational": 4, 00:12:11.517 "base_bdevs_list": [ 00:12:11.517 { 
00:12:11.517 "name": "BaseBdev1", 00:12:11.517 "uuid": "a46c4022-2c30-568b-ae59-06380779f24e", 00:12:11.517 "is_configured": true, 00:12:11.517 "data_offset": 2048, 00:12:11.517 "data_size": 63488 00:12:11.517 }, 00:12:11.517 { 00:12:11.517 "name": "BaseBdev2", 00:12:11.517 "uuid": "21111f5d-cea2-537b-bf33-c710a9b28546", 00:12:11.517 "is_configured": true, 00:12:11.517 "data_offset": 2048, 00:12:11.517 "data_size": 63488 00:12:11.517 }, 00:12:11.517 { 00:12:11.517 "name": "BaseBdev3", 00:12:11.517 "uuid": "7f4ff813-0a27-5ad2-8429-8ccf10875d4a", 00:12:11.517 "is_configured": true, 00:12:11.517 "data_offset": 2048, 00:12:11.517 "data_size": 63488 00:12:11.517 }, 00:12:11.517 { 00:12:11.517 "name": "BaseBdev4", 00:12:11.517 "uuid": "49c11c4d-7846-5382-a467-85adf5b34023", 00:12:11.517 "is_configured": true, 00:12:11.517 "data_offset": 2048, 00:12:11.517 "data_size": 63488 00:12:11.517 } 00:12:11.517 ] 00:12:11.517 }' 00:12:11.517 20:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.517 20:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.084 20:25:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:12.084 20:25:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:12.084 [2024-11-26 20:25:05.448191] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:13.021 20:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:13.021 20:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.021 20:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.021 20:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.021 20:25:06 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:13.021 20:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:13.021 20:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:12:13.021 20:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:13.021 20:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:13.021 20:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:13.021 20:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:13.021 20:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:13.021 20:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:13.021 20:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:13.021 20:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.021 20:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.021 20:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.021 20:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.021 20:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.021 20:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.021 20:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.021 20:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.021 20:25:06 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.021 20:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.021 "name": "raid_bdev1", 00:12:13.021 "uuid": "7f319c36-6edc-4cd0-a287-34822c347926", 00:12:13.021 "strip_size_kb": 0, 00:12:13.021 "state": "online", 00:12:13.021 "raid_level": "raid1", 00:12:13.021 "superblock": true, 00:12:13.021 "num_base_bdevs": 4, 00:12:13.021 "num_base_bdevs_discovered": 4, 00:12:13.021 "num_base_bdevs_operational": 4, 00:12:13.021 "base_bdevs_list": [ 00:12:13.021 { 00:12:13.021 "name": "BaseBdev1", 00:12:13.021 "uuid": "a46c4022-2c30-568b-ae59-06380779f24e", 00:12:13.021 "is_configured": true, 00:12:13.021 "data_offset": 2048, 00:12:13.021 "data_size": 63488 00:12:13.021 }, 00:12:13.021 { 00:12:13.021 "name": "BaseBdev2", 00:12:13.021 "uuid": "21111f5d-cea2-537b-bf33-c710a9b28546", 00:12:13.021 "is_configured": true, 00:12:13.021 "data_offset": 2048, 00:12:13.021 "data_size": 63488 00:12:13.021 }, 00:12:13.021 { 00:12:13.021 "name": "BaseBdev3", 00:12:13.021 "uuid": "7f4ff813-0a27-5ad2-8429-8ccf10875d4a", 00:12:13.021 "is_configured": true, 00:12:13.021 "data_offset": 2048, 00:12:13.021 "data_size": 63488 00:12:13.021 }, 00:12:13.021 { 00:12:13.021 "name": "BaseBdev4", 00:12:13.021 "uuid": "49c11c4d-7846-5382-a467-85adf5b34023", 00:12:13.021 "is_configured": true, 00:12:13.021 "data_offset": 2048, 00:12:13.021 "data_size": 63488 00:12:13.021 } 00:12:13.021 ] 00:12:13.021 }' 00:12:13.022 20:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.022 20:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.280 20:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:13.280 20:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.280 20:25:06 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:13.280 [2024-11-26 20:25:06.809523] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:13.280 [2024-11-26 20:25:06.809570] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:13.280 [2024-11-26 20:25:06.812611] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:13.280 [2024-11-26 20:25:06.812686] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:13.280 [2024-11-26 20:25:06.812829] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:13.280 [2024-11-26 20:25:06.812845] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:12:13.280 { 00:12:13.280 "results": [ 00:12:13.280 { 00:12:13.280 "job": "raid_bdev1", 00:12:13.280 "core_mask": "0x1", 00:12:13.280 "workload": "randrw", 00:12:13.280 "percentage": 50, 00:12:13.280 "status": "finished", 00:12:13.280 "queue_depth": 1, 00:12:13.280 "io_size": 131072, 00:12:13.280 "runtime": 1.362086, 00:12:13.280 "iops": 8548.652581408222, 00:12:13.280 "mibps": 1068.5815726760277, 00:12:13.280 "io_failed": 0, 00:12:13.280 "io_timeout": 0, 00:12:13.280 "avg_latency_us": 114.12821296722716, 00:12:13.280 "min_latency_us": 23.699563318777294, 00:12:13.280 "max_latency_us": 2074.829694323144 00:12:13.280 } 00:12:13.280 ], 00:12:13.280 "core_count": 1 00:12:13.280 } 00:12:13.280 20:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.281 20:25:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 86268 00:12:13.281 20:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 86268 ']' 00:12:13.281 20:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 86268 00:12:13.281 20:25:06 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # uname 00:12:13.281 20:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:13.281 20:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86268 00:12:13.539 20:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:13.539 20:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:13.539 killing process with pid 86268 00:12:13.539 20:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86268' 00:12:13.539 20:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 86268 00:12:13.539 [2024-11-26 20:25:06.857015] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:13.539 20:25:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 86268 00:12:13.539 [2024-11-26 20:25:06.918618] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:13.799 20:25:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.yvHHploemr 00:12:13.799 20:25:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:13.799 20:25:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:13.799 20:25:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:13.799 20:25:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:13.799 20:25:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:13.799 20:25:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:13.799 20:25:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:13.799 00:12:13.799 real 0m3.524s 00:12:13.799 user 0m4.319s 00:12:13.799 sys 0m0.662s 
00:12:13.799 20:25:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:13.799 20:25:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.799 ************************************ 00:12:13.799 END TEST raid_read_error_test 00:12:13.799 ************************************ 00:12:13.799 20:25:07 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:12:13.799 20:25:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:13.799 20:25:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:13.799 20:25:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:13.799 ************************************ 00:12:13.799 START TEST raid_write_error_test 00:12:13.799 ************************************ 00:12:13.799 20:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 write 00:12:13.799 20:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:14.058 20:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:14.058 20:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:14.058 20:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:14.058 20:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:14.058 20:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:14.058 20:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:14.058 20:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:14.058 20:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:14.058 20:25:07 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:14.058 20:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:14.058 20:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:14.058 20:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:14.058 20:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:14.058 20:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:14.058 20:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:14.058 20:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:14.058 20:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:14.058 20:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:14.058 20:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:14.058 20:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:14.058 20:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:14.058 20:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:14.058 20:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:14.058 20:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:14.058 20:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:14.058 20:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:14.058 20:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.XoIOtvXpzm 00:12:14.058 20:25:07 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=86408 00:12:14.058 20:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:14.058 20:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 86408 00:12:14.058 20:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 86408 ']' 00:12:14.058 20:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.058 20:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:14.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:14.059 20:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:14.059 20:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:14.059 20:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.059 [2024-11-26 20:25:07.455479] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:12:14.059 [2024-11-26 20:25:07.455627] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86408 ] 00:12:14.318 [2024-11-26 20:25:07.618703] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.318 [2024-11-26 20:25:07.699571] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.318 [2024-11-26 20:25:07.773363] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:14.318 [2024-11-26 20:25:07.773403] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:14.885 20:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:14.885 20:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:12:14.885 20:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:14.885 20:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:14.885 20:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.885 20:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.885 BaseBdev1_malloc 00:12:14.885 20:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.885 20:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:14.885 20:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.885 20:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.885 true 00:12:14.885 20:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:12:14.885 20:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:14.885 20:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.885 20:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.885 [2024-11-26 20:25:08.381689] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:14.885 [2024-11-26 20:25:08.381753] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:14.885 [2024-11-26 20:25:08.381778] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:14.885 [2024-11-26 20:25:08.381798] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:14.885 [2024-11-26 20:25:08.384254] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:14.885 [2024-11-26 20:25:08.384290] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:14.885 BaseBdev1 00:12:14.885 20:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.885 20:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:14.885 20:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:14.885 20:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.885 20:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.885 BaseBdev2_malloc 00:12:14.885 20:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.885 20:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:14.885 20:25:08 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.885 20:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.885 true 00:12:14.885 20:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.885 20:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:14.885 20:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.885 20:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.145 [2024-11-26 20:25:08.437696] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:15.145 [2024-11-26 20:25:08.437749] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:15.145 [2024-11-26 20:25:08.437770] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:15.145 [2024-11-26 20:25:08.437779] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:15.145 [2024-11-26 20:25:08.439998] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:15.145 [2024-11-26 20:25:08.440032] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:15.145 BaseBdev2 00:12:15.145 20:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.145 20:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:15.145 20:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:15.145 20:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.145 20:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:15.145 BaseBdev3_malloc 00:12:15.145 20:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.145 20:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:15.145 20:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.145 20:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.145 true 00:12:15.145 20:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.145 20:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:15.145 20:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.145 20:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.145 [2024-11-26 20:25:08.480267] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:15.145 [2024-11-26 20:25:08.480317] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:15.145 [2024-11-26 20:25:08.480336] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:15.145 [2024-11-26 20:25:08.480345] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:15.145 [2024-11-26 20:25:08.482454] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:15.145 [2024-11-26 20:25:08.482487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:15.145 BaseBdev3 00:12:15.145 20:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.145 20:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:15.145 20:25:08 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:15.145 20:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.145 20:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.145 BaseBdev4_malloc 00:12:15.145 20:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.145 20:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:15.145 20:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.145 20:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.145 true 00:12:15.145 20:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.145 20:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:15.145 20:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.145 20:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.145 [2024-11-26 20:25:08.526784] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:15.145 [2024-11-26 20:25:08.526836] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:15.145 [2024-11-26 20:25:08.526861] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:15.145 [2024-11-26 20:25:08.526870] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:15.145 [2024-11-26 20:25:08.529005] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:15.145 [2024-11-26 20:25:08.529041] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:15.145 BaseBdev4 
00:12:15.145 20:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.145 20:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:15.145 20:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.145 20:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.145 [2024-11-26 20:25:08.538809] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:15.145 [2024-11-26 20:25:08.540839] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:15.145 [2024-11-26 20:25:08.540939] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:15.145 [2024-11-26 20:25:08.541001] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:15.145 [2024-11-26 20:25:08.541267] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:12:15.145 [2024-11-26 20:25:08.541285] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:15.145 [2024-11-26 20:25:08.541575] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:15.145 [2024-11-26 20:25:08.541755] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:12:15.145 [2024-11-26 20:25:08.541777] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:12:15.145 [2024-11-26 20:25:08.541917] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:15.145 20:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.145 20:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:12:15.145 20:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:15.145 20:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:15.145 20:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:15.145 20:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:15.145 20:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:15.145 20:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.145 20:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.145 20:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.145 20:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.145 20:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.145 20:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.145 20:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.145 20:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.145 20:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.145 20:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.145 "name": "raid_bdev1", 00:12:15.145 "uuid": "910518b2-be3f-4eeb-bd35-dac2e0fe8b53", 00:12:15.145 "strip_size_kb": 0, 00:12:15.145 "state": "online", 00:12:15.145 "raid_level": "raid1", 00:12:15.145 "superblock": true, 00:12:15.145 "num_base_bdevs": 4, 00:12:15.145 "num_base_bdevs_discovered": 4, 00:12:15.145 
"num_base_bdevs_operational": 4, 00:12:15.145 "base_bdevs_list": [ 00:12:15.145 { 00:12:15.145 "name": "BaseBdev1", 00:12:15.145 "uuid": "c43e5fcc-e221-5859-bbc4-cd030a27153e", 00:12:15.145 "is_configured": true, 00:12:15.145 "data_offset": 2048, 00:12:15.145 "data_size": 63488 00:12:15.145 }, 00:12:15.145 { 00:12:15.145 "name": "BaseBdev2", 00:12:15.145 "uuid": "466018a7-da78-5897-81e5-0d669bef0e6c", 00:12:15.145 "is_configured": true, 00:12:15.145 "data_offset": 2048, 00:12:15.145 "data_size": 63488 00:12:15.145 }, 00:12:15.145 { 00:12:15.145 "name": "BaseBdev3", 00:12:15.145 "uuid": "c7c4e37b-cd68-5ab8-bab4-61b7089deec2", 00:12:15.145 "is_configured": true, 00:12:15.145 "data_offset": 2048, 00:12:15.145 "data_size": 63488 00:12:15.145 }, 00:12:15.146 { 00:12:15.146 "name": "BaseBdev4", 00:12:15.146 "uuid": "46358f89-e42e-5744-9d51-f4f6504c4ea7", 00:12:15.146 "is_configured": true, 00:12:15.146 "data_offset": 2048, 00:12:15.146 "data_size": 63488 00:12:15.146 } 00:12:15.146 ] 00:12:15.146 }' 00:12:15.146 20:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.146 20:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.714 20:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:15.714 20:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:15.714 [2024-11-26 20:25:09.126245] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:16.650 20:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:16.650 20:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.650 20:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.650 [2024-11-26 20:25:10.041411] 
bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:16.650 [2024-11-26 20:25:10.041464] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:16.650 [2024-11-26 20:25:10.041710] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:12:16.650 20:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.650 20:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:16.650 20:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:16.650 20:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:12:16.650 20:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:12:16.650 20:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:16.650 20:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:16.650 20:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:16.650 20:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:16.650 20:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:16.650 20:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:16.650 20:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.650 20:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.650 20:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.650 20:25:10 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.650 20:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.650 20:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.650 20:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.650 20:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.650 20:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.650 20:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.650 "name": "raid_bdev1", 00:12:16.650 "uuid": "910518b2-be3f-4eeb-bd35-dac2e0fe8b53", 00:12:16.650 "strip_size_kb": 0, 00:12:16.650 "state": "online", 00:12:16.650 "raid_level": "raid1", 00:12:16.650 "superblock": true, 00:12:16.650 "num_base_bdevs": 4, 00:12:16.650 "num_base_bdevs_discovered": 3, 00:12:16.650 "num_base_bdevs_operational": 3, 00:12:16.650 "base_bdevs_list": [ 00:12:16.650 { 00:12:16.650 "name": null, 00:12:16.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.651 "is_configured": false, 00:12:16.651 "data_offset": 0, 00:12:16.651 "data_size": 63488 00:12:16.651 }, 00:12:16.651 { 00:12:16.651 "name": "BaseBdev2", 00:12:16.651 "uuid": "466018a7-da78-5897-81e5-0d669bef0e6c", 00:12:16.651 "is_configured": true, 00:12:16.651 "data_offset": 2048, 00:12:16.651 "data_size": 63488 00:12:16.651 }, 00:12:16.651 { 00:12:16.651 "name": "BaseBdev3", 00:12:16.651 "uuid": "c7c4e37b-cd68-5ab8-bab4-61b7089deec2", 00:12:16.651 "is_configured": true, 00:12:16.651 "data_offset": 2048, 00:12:16.651 "data_size": 63488 00:12:16.651 }, 00:12:16.651 { 00:12:16.651 "name": "BaseBdev4", 00:12:16.651 "uuid": "46358f89-e42e-5744-9d51-f4f6504c4ea7", 00:12:16.651 "is_configured": true, 00:12:16.651 "data_offset": 2048, 00:12:16.651 "data_size": 63488 00:12:16.651 } 00:12:16.651 ] 
00:12:16.651 }' 00:12:16.651 20:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.651 20:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.218 20:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:17.218 20:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.218 20:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.218 [2024-11-26 20:25:10.527304] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:17.218 [2024-11-26 20:25:10.527344] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:17.218 [2024-11-26 20:25:10.530272] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:17.218 [2024-11-26 20:25:10.530378] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:17.218 [2024-11-26 20:25:10.530485] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:17.218 [2024-11-26 20:25:10.530505] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:12:17.218 { 00:12:17.218 "results": [ 00:12:17.218 { 00:12:17.218 "job": "raid_bdev1", 00:12:17.218 "core_mask": "0x1", 00:12:17.218 "workload": "randrw", 00:12:17.218 "percentage": 50, 00:12:17.218 "status": "finished", 00:12:17.218 "queue_depth": 1, 00:12:17.218 "io_size": 131072, 00:12:17.218 "runtime": 1.401768, 00:12:17.218 "iops": 9465.903059564778, 00:12:17.218 "mibps": 1183.2378824455973, 00:12:17.218 "io_failed": 0, 00:12:17.218 "io_timeout": 0, 00:12:17.218 "avg_latency_us": 102.79621180931619, 00:12:17.218 "min_latency_us": 23.699563318777294, 00:12:17.218 "max_latency_us": 1523.926637554585 00:12:17.218 } 00:12:17.218 ], 00:12:17.218 "core_count": 1 
00:12:17.218 } 00:12:17.218 20:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.218 20:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 86408 00:12:17.218 20:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 86408 ']' 00:12:17.218 20:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 86408 00:12:17.218 20:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:12:17.218 20:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:17.218 20:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86408 00:12:17.218 20:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:17.218 20:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:17.218 killing process with pid 86408 00:12:17.218 20:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86408' 00:12:17.218 20:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 86408 00:12:17.218 [2024-11-26 20:25:10.575422] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:17.218 20:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 86408 00:12:17.219 [2024-11-26 20:25:10.629086] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:17.478 20:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.XoIOtvXpzm 00:12:17.478 20:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:17.478 20:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:17.478 20:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:12:17.478 20:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:17.478 20:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:17.478 20:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:17.478 ************************************ 00:12:17.478 END TEST raid_write_error_test 00:12:17.478 ************************************ 00:12:17.478 20:25:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:17.478 00:12:17.478 real 0m3.649s 00:12:17.478 user 0m4.551s 00:12:17.478 sys 0m0.653s 00:12:17.478 20:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:17.478 20:25:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.737 20:25:11 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:12:17.737 20:25:11 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:17.737 20:25:11 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:12:17.737 20:25:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:17.737 20:25:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:17.737 20:25:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:17.737 ************************************ 00:12:17.737 START TEST raid_rebuild_test 00:12:17.737 ************************************ 00:12:17.737 20:25:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false false true 00:12:17.737 20:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:17.737 20:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:17.737 20:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:17.737 
20:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:17.737 20:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:17.737 20:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:17.737 20:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:17.737 20:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:17.737 20:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:17.737 20:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:17.737 20:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:17.737 20:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:17.737 20:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:17.737 20:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:17.737 20:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:17.737 20:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:17.737 20:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:17.737 20:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:17.737 20:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:17.737 20:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:17.737 20:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:17.737 20:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:17.737 20:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:12:17.737 20:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=86541 00:12:17.737 20:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:17.737 20:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 86541 00:12:17.737 20:25:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 86541 ']' 00:12:17.737 20:25:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:17.737 20:25:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:17.737 20:25:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:17.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:17.737 20:25:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:17.737 20:25:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.737 [2024-11-26 20:25:11.165428] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:12:17.737 [2024-11-26 20:25:11.165673] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86541 ] 00:12:17.737 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:17.737 Zero copy mechanism will not be used. 
00:12:17.996 [2024-11-26 20:25:11.325956] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.996 [2024-11-26 20:25:11.403204] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.996 [2024-11-26 20:25:11.479853] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:17.996 [2024-11-26 20:25:11.479967] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:18.563 20:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:18.563 20:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:12:18.563 20:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:18.563 20:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:18.563 20:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.563 20:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.563 BaseBdev1_malloc 00:12:18.563 20:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.563 20:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:18.563 20:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.563 20:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.563 [2024-11-26 20:25:12.045500] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:18.563 [2024-11-26 20:25:12.045572] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:18.563 [2024-11-26 20:25:12.045604] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:18.563 [2024-11-26 20:25:12.045645] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.563 [2024-11-26 20:25:12.047875] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.563 [2024-11-26 20:25:12.047965] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:18.563 BaseBdev1 00:12:18.563 20:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.563 20:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:18.563 20:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:18.563 20:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.563 20:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.563 BaseBdev2_malloc 00:12:18.563 20:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.563 20:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:18.563 20:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.563 20:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.563 [2024-11-26 20:25:12.086173] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:18.563 [2024-11-26 20:25:12.086254] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:18.563 [2024-11-26 20:25:12.086288] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:18.563 [2024-11-26 20:25:12.086302] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.563 [2024-11-26 20:25:12.089471] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.563 [2024-11-26 20:25:12.089570] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:18.563 BaseBdev2 00:12:18.563 20:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.563 20:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:18.563 20:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.563 20:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.821 spare_malloc 00:12:18.821 20:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.821 20:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:18.821 20:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.821 20:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.821 spare_delay 00:12:18.821 20:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.821 20:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:18.821 20:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.821 20:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.821 [2024-11-26 20:25:12.133228] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:18.821 [2024-11-26 20:25:12.133292] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:18.821 [2024-11-26 20:25:12.133316] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:18.821 [2024-11-26 20:25:12.133325] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.821 [2024-11-26 
20:25:12.135474] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.821 [2024-11-26 20:25:12.135512] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:18.821 spare 00:12:18.821 20:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.821 20:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:18.821 20:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.821 20:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.821 [2024-11-26 20:25:12.145243] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:18.821 [2024-11-26 20:25:12.147140] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:18.821 [2024-11-26 20:25:12.147231] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:12:18.821 [2024-11-26 20:25:12.147250] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:18.821 [2024-11-26 20:25:12.147503] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:12:18.821 [2024-11-26 20:25:12.147643] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:12:18.821 [2024-11-26 20:25:12.147657] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:12:18.821 [2024-11-26 20:25:12.147793] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:18.821 20:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.821 20:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:18.821 20:25:12 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:18.821 20:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:18.821 20:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.821 20:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:18.821 20:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:18.821 20:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.821 20:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.821 20:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.821 20:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.821 20:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.821 20:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.821 20:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.821 20:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.821 20:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.821 20:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.821 "name": "raid_bdev1", 00:12:18.821 "uuid": "35cbfefb-7def-4dc2-afff-96cd2d3a9efa", 00:12:18.821 "strip_size_kb": 0, 00:12:18.821 "state": "online", 00:12:18.821 "raid_level": "raid1", 00:12:18.821 "superblock": false, 00:12:18.821 "num_base_bdevs": 2, 00:12:18.821 "num_base_bdevs_discovered": 2, 00:12:18.821 "num_base_bdevs_operational": 2, 00:12:18.821 "base_bdevs_list": [ 00:12:18.821 { 00:12:18.821 "name": "BaseBdev1", 
00:12:18.821 "uuid": "1a771361-67bb-568e-abca-c23046b9ba53", 00:12:18.821 "is_configured": true, 00:12:18.821 "data_offset": 0, 00:12:18.821 "data_size": 65536 00:12:18.821 }, 00:12:18.821 { 00:12:18.821 "name": "BaseBdev2", 00:12:18.821 "uuid": "7217a566-fe86-5ac6-ad93-eb2a61b9cbfd", 00:12:18.821 "is_configured": true, 00:12:18.821 "data_offset": 0, 00:12:18.821 "data_size": 65536 00:12:18.821 } 00:12:18.821 ] 00:12:18.821 }' 00:12:18.821 20:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.821 20:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.080 20:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:19.080 20:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:19.080 20:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.080 20:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.080 [2024-11-26 20:25:12.573022] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:19.080 20:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.080 20:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:19.080 20:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:19.080 20:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.080 20:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.080 20:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.080 20:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.338 20:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:19.338 
20:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:19.338 20:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:19.338 20:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:19.338 20:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:19.338 20:25:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:19.338 20:25:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:19.338 20:25:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:19.338 20:25:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:19.338 20:25:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:19.338 20:25:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:19.338 20:25:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:19.338 20:25:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:19.338 20:25:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:19.338 [2024-11-26 20:25:12.828320] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:19.338 /dev/nbd0 00:12:19.338 20:25:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:19.338 20:25:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:19.338 20:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:19.338 20:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:19.338 20:25:12 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:19.338 20:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:19.338 20:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:19.338 20:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:19.338 20:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:19.338 20:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:19.338 20:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:19.338 1+0 records in 00:12:19.338 1+0 records out 00:12:19.338 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00057345 s, 7.1 MB/s 00:12:19.598 20:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.598 20:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:19.598 20:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.598 20:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:19.598 20:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:19.598 20:25:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:19.598 20:25:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:19.598 20:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:19.598 20:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:19.598 20:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 
00:12:23.838 65536+0 records in 00:12:23.838 65536+0 records out 00:12:23.838 33554432 bytes (34 MB, 32 MiB) copied, 4.29147 s, 7.8 MB/s 00:12:23.838 20:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:23.838 20:25:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:23.838 20:25:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:23.838 20:25:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:23.838 20:25:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:23.838 20:25:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:23.838 20:25:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:24.097 20:25:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:24.097 [2024-11-26 20:25:17.429868] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:24.097 20:25:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:24.097 20:25:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:24.097 20:25:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:24.097 20:25:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:24.097 20:25:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:24.097 20:25:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:24.097 20:25:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:24.097 20:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:24.097 20:25:17 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.097 20:25:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.097 [2024-11-26 20:25:17.445981] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:24.097 20:25:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.097 20:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:24.097 20:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:24.097 20:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:24.097 20:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:24.097 20:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:24.097 20:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:24.097 20:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.097 20:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.097 20:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.097 20:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.097 20:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.097 20:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.097 20:25:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.097 20:25:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.097 20:25:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.097 20:25:17 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.097 "name": "raid_bdev1", 00:12:24.097 "uuid": "35cbfefb-7def-4dc2-afff-96cd2d3a9efa", 00:12:24.097 "strip_size_kb": 0, 00:12:24.097 "state": "online", 00:12:24.097 "raid_level": "raid1", 00:12:24.097 "superblock": false, 00:12:24.097 "num_base_bdevs": 2, 00:12:24.097 "num_base_bdevs_discovered": 1, 00:12:24.097 "num_base_bdevs_operational": 1, 00:12:24.097 "base_bdevs_list": [ 00:12:24.097 { 00:12:24.097 "name": null, 00:12:24.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.097 "is_configured": false, 00:12:24.097 "data_offset": 0, 00:12:24.097 "data_size": 65536 00:12:24.097 }, 00:12:24.097 { 00:12:24.097 "name": "BaseBdev2", 00:12:24.097 "uuid": "7217a566-fe86-5ac6-ad93-eb2a61b9cbfd", 00:12:24.097 "is_configured": true, 00:12:24.097 "data_offset": 0, 00:12:24.097 "data_size": 65536 00:12:24.097 } 00:12:24.097 ] 00:12:24.097 }' 00:12:24.097 20:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.097 20:25:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.671 20:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:24.671 20:25:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.671 20:25:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.671 [2024-11-26 20:25:17.921191] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:24.671 [2024-11-26 20:25:17.926975] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09a30 00:12:24.671 20:25:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.671 20:25:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:24.671 [2024-11-26 20:25:17.928984] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started 
rebuild on raid bdev raid_bdev1 00:12:25.606 20:25:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:25.606 20:25:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:25.606 20:25:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:25.606 20:25:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:25.606 20:25:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:25.606 20:25:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.606 20:25:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.606 20:25:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.606 20:25:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.606 20:25:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.606 20:25:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:25.606 "name": "raid_bdev1", 00:12:25.606 "uuid": "35cbfefb-7def-4dc2-afff-96cd2d3a9efa", 00:12:25.606 "strip_size_kb": 0, 00:12:25.606 "state": "online", 00:12:25.606 "raid_level": "raid1", 00:12:25.606 "superblock": false, 00:12:25.606 "num_base_bdevs": 2, 00:12:25.606 "num_base_bdevs_discovered": 2, 00:12:25.606 "num_base_bdevs_operational": 2, 00:12:25.606 "process": { 00:12:25.606 "type": "rebuild", 00:12:25.606 "target": "spare", 00:12:25.606 "progress": { 00:12:25.606 "blocks": 20480, 00:12:25.606 "percent": 31 00:12:25.606 } 00:12:25.606 }, 00:12:25.606 "base_bdevs_list": [ 00:12:25.606 { 00:12:25.606 "name": "spare", 00:12:25.606 "uuid": "ee2a7c7f-6169-547a-9dc5-bc72e7afa5ec", 00:12:25.606 "is_configured": true, 00:12:25.606 "data_offset": 0, 00:12:25.606 
"data_size": 65536 00:12:25.606 }, 00:12:25.606 { 00:12:25.606 "name": "BaseBdev2", 00:12:25.606 "uuid": "7217a566-fe86-5ac6-ad93-eb2a61b9cbfd", 00:12:25.606 "is_configured": true, 00:12:25.606 "data_offset": 0, 00:12:25.606 "data_size": 65536 00:12:25.606 } 00:12:25.606 ] 00:12:25.606 }' 00:12:25.606 20:25:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:25.606 20:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:25.606 20:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:25.606 20:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:25.606 20:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:25.606 20:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.606 20:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.606 [2024-11-26 20:25:19.090432] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:25.606 [2024-11-26 20:25:19.138344] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:25.606 [2024-11-26 20:25:19.138429] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:25.606 [2024-11-26 20:25:19.138452] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:25.606 [2024-11-26 20:25:19.138461] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:25.864 20:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.864 20:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:25.864 20:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:12:25.864 20:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:25.864 20:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:25.864 20:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:25.864 20:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:25.864 20:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.864 20:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.864 20:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.864 20:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.864 20:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.864 20:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.864 20:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.864 20:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.864 20:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.864 20:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.864 "name": "raid_bdev1", 00:12:25.864 "uuid": "35cbfefb-7def-4dc2-afff-96cd2d3a9efa", 00:12:25.864 "strip_size_kb": 0, 00:12:25.864 "state": "online", 00:12:25.864 "raid_level": "raid1", 00:12:25.864 "superblock": false, 00:12:25.864 "num_base_bdevs": 2, 00:12:25.864 "num_base_bdevs_discovered": 1, 00:12:25.864 "num_base_bdevs_operational": 1, 00:12:25.864 "base_bdevs_list": [ 00:12:25.864 { 00:12:25.864 "name": null, 00:12:25.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:25.864 
"is_configured": false, 00:12:25.864 "data_offset": 0, 00:12:25.864 "data_size": 65536 00:12:25.864 }, 00:12:25.864 { 00:12:25.864 "name": "BaseBdev2", 00:12:25.864 "uuid": "7217a566-fe86-5ac6-ad93-eb2a61b9cbfd", 00:12:25.864 "is_configured": true, 00:12:25.864 "data_offset": 0, 00:12:25.864 "data_size": 65536 00:12:25.864 } 00:12:25.864 ] 00:12:25.864 }' 00:12:25.864 20:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.864 20:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.122 20:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:26.122 20:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:26.122 20:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:26.122 20:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:26.122 20:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:26.122 20:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.122 20:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.122 20:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.122 20:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.122 20:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.122 20:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:26.122 "name": "raid_bdev1", 00:12:26.122 "uuid": "35cbfefb-7def-4dc2-afff-96cd2d3a9efa", 00:12:26.122 "strip_size_kb": 0, 00:12:26.122 "state": "online", 00:12:26.122 "raid_level": "raid1", 00:12:26.122 "superblock": false, 00:12:26.122 "num_base_bdevs": 2, 00:12:26.122 
"num_base_bdevs_discovered": 1, 00:12:26.122 "num_base_bdevs_operational": 1, 00:12:26.122 "base_bdevs_list": [ 00:12:26.122 { 00:12:26.122 "name": null, 00:12:26.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.122 "is_configured": false, 00:12:26.122 "data_offset": 0, 00:12:26.122 "data_size": 65536 00:12:26.122 }, 00:12:26.122 { 00:12:26.122 "name": "BaseBdev2", 00:12:26.122 "uuid": "7217a566-fe86-5ac6-ad93-eb2a61b9cbfd", 00:12:26.122 "is_configured": true, 00:12:26.122 "data_offset": 0, 00:12:26.122 "data_size": 65536 00:12:26.122 } 00:12:26.122 ] 00:12:26.122 }' 00:12:26.381 20:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:26.381 20:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:26.381 20:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:26.381 20:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:26.381 20:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:26.381 20:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.381 20:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.381 [2024-11-26 20:25:19.776714] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:26.381 [2024-11-26 20:25:19.782706] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09b00 00:12:26.381 20:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.381 20:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:26.381 [2024-11-26 20:25:19.784962] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:27.317 20:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:27.317 20:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:27.317 20:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:27.317 20:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:27.317 20:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:27.317 20:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.317 20:25:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.317 20:25:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.317 20:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.317 20:25:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.317 20:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:27.317 "name": "raid_bdev1", 00:12:27.317 "uuid": "35cbfefb-7def-4dc2-afff-96cd2d3a9efa", 00:12:27.317 "strip_size_kb": 0, 00:12:27.317 "state": "online", 00:12:27.317 "raid_level": "raid1", 00:12:27.317 "superblock": false, 00:12:27.317 "num_base_bdevs": 2, 00:12:27.317 "num_base_bdevs_discovered": 2, 00:12:27.317 "num_base_bdevs_operational": 2, 00:12:27.317 "process": { 00:12:27.317 "type": "rebuild", 00:12:27.317 "target": "spare", 00:12:27.317 "progress": { 00:12:27.317 "blocks": 20480, 00:12:27.317 "percent": 31 00:12:27.317 } 00:12:27.317 }, 00:12:27.317 "base_bdevs_list": [ 00:12:27.317 { 00:12:27.317 "name": "spare", 00:12:27.317 "uuid": "ee2a7c7f-6169-547a-9dc5-bc72e7afa5ec", 00:12:27.317 "is_configured": true, 00:12:27.317 "data_offset": 0, 00:12:27.317 "data_size": 65536 00:12:27.317 }, 00:12:27.317 { 00:12:27.317 "name": "BaseBdev2", 00:12:27.317 "uuid": 
"7217a566-fe86-5ac6-ad93-eb2a61b9cbfd", 00:12:27.317 "is_configured": true, 00:12:27.317 "data_offset": 0, 00:12:27.317 "data_size": 65536 00:12:27.317 } 00:12:27.317 ] 00:12:27.317 }' 00:12:27.317 20:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:27.576 20:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:27.576 20:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:27.576 20:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:27.576 20:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:27.576 20:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:27.576 20:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:27.576 20:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:27.576 20:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=309 00:12:27.576 20:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:27.576 20:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:27.576 20:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:27.576 20:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:27.576 20:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:27.576 20:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:27.576 20:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.576 20:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:12:27.576 20:25:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.576 20:25:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.576 20:25:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.576 20:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:27.576 "name": "raid_bdev1", 00:12:27.576 "uuid": "35cbfefb-7def-4dc2-afff-96cd2d3a9efa", 00:12:27.576 "strip_size_kb": 0, 00:12:27.576 "state": "online", 00:12:27.576 "raid_level": "raid1", 00:12:27.576 "superblock": false, 00:12:27.576 "num_base_bdevs": 2, 00:12:27.576 "num_base_bdevs_discovered": 2, 00:12:27.576 "num_base_bdevs_operational": 2, 00:12:27.576 "process": { 00:12:27.576 "type": "rebuild", 00:12:27.576 "target": "spare", 00:12:27.576 "progress": { 00:12:27.576 "blocks": 22528, 00:12:27.576 "percent": 34 00:12:27.576 } 00:12:27.576 }, 00:12:27.576 "base_bdevs_list": [ 00:12:27.576 { 00:12:27.576 "name": "spare", 00:12:27.576 "uuid": "ee2a7c7f-6169-547a-9dc5-bc72e7afa5ec", 00:12:27.576 "is_configured": true, 00:12:27.576 "data_offset": 0, 00:12:27.576 "data_size": 65536 00:12:27.576 }, 00:12:27.576 { 00:12:27.576 "name": "BaseBdev2", 00:12:27.576 "uuid": "7217a566-fe86-5ac6-ad93-eb2a61b9cbfd", 00:12:27.576 "is_configured": true, 00:12:27.576 "data_offset": 0, 00:12:27.576 "data_size": 65536 00:12:27.576 } 00:12:27.576 ] 00:12:27.576 }' 00:12:27.576 20:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:27.576 20:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:27.576 20:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:27.576 20:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:27.576 20:25:21 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:12:28.952 20:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:28.952 20:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:28.952 20:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:28.952 20:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:28.952 20:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:28.952 20:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:28.952 20:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.952 20:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.952 20:25:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.952 20:25:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.952 20:25:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.952 20:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:28.952 "name": "raid_bdev1", 00:12:28.952 "uuid": "35cbfefb-7def-4dc2-afff-96cd2d3a9efa", 00:12:28.952 "strip_size_kb": 0, 00:12:28.952 "state": "online", 00:12:28.952 "raid_level": "raid1", 00:12:28.952 "superblock": false, 00:12:28.952 "num_base_bdevs": 2, 00:12:28.952 "num_base_bdevs_discovered": 2, 00:12:28.952 "num_base_bdevs_operational": 2, 00:12:28.952 "process": { 00:12:28.952 "type": "rebuild", 00:12:28.952 "target": "spare", 00:12:28.952 "progress": { 00:12:28.952 "blocks": 45056, 00:12:28.952 "percent": 68 00:12:28.952 } 00:12:28.952 }, 00:12:28.952 "base_bdevs_list": [ 00:12:28.952 { 00:12:28.952 "name": "spare", 00:12:28.952 "uuid": 
"ee2a7c7f-6169-547a-9dc5-bc72e7afa5ec", 00:12:28.952 "is_configured": true, 00:12:28.952 "data_offset": 0, 00:12:28.952 "data_size": 65536 00:12:28.952 }, 00:12:28.952 { 00:12:28.952 "name": "BaseBdev2", 00:12:28.952 "uuid": "7217a566-fe86-5ac6-ad93-eb2a61b9cbfd", 00:12:28.952 "is_configured": true, 00:12:28.952 "data_offset": 0, 00:12:28.952 "data_size": 65536 00:12:28.952 } 00:12:28.952 ] 00:12:28.952 }' 00:12:28.952 20:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:28.952 20:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:28.952 20:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:28.952 20:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:28.952 20:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:29.520 [2024-11-26 20:25:23.007563] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:29.520 [2024-11-26 20:25:23.007676] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:29.520 [2024-11-26 20:25:23.007720] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:29.778 20:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:29.778 20:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:29.778 20:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:29.778 20:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:29.778 20:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:29.778 20:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:29.778 20:25:23 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.778 20:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.778 20:25:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.778 20:25:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.778 20:25:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.778 20:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:29.778 "name": "raid_bdev1", 00:12:29.778 "uuid": "35cbfefb-7def-4dc2-afff-96cd2d3a9efa", 00:12:29.778 "strip_size_kb": 0, 00:12:29.778 "state": "online", 00:12:29.778 "raid_level": "raid1", 00:12:29.778 "superblock": false, 00:12:29.778 "num_base_bdevs": 2, 00:12:29.778 "num_base_bdevs_discovered": 2, 00:12:29.778 "num_base_bdevs_operational": 2, 00:12:29.778 "base_bdevs_list": [ 00:12:29.778 { 00:12:29.778 "name": "spare", 00:12:29.778 "uuid": "ee2a7c7f-6169-547a-9dc5-bc72e7afa5ec", 00:12:29.778 "is_configured": true, 00:12:29.778 "data_offset": 0, 00:12:29.778 "data_size": 65536 00:12:29.778 }, 00:12:29.778 { 00:12:29.778 "name": "BaseBdev2", 00:12:29.778 "uuid": "7217a566-fe86-5ac6-ad93-eb2a61b9cbfd", 00:12:29.778 "is_configured": true, 00:12:29.778 "data_offset": 0, 00:12:29.778 "data_size": 65536 00:12:29.778 } 00:12:29.778 ] 00:12:29.778 }' 00:12:29.779 20:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:29.779 20:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:29.779 20:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:30.037 20:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:30.037 20:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # 
break 00:12:30.037 20:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:30.037 20:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:30.037 20:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:30.037 20:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:30.037 20:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:30.037 20:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.037 20:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.037 20:25:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.037 20:25:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.037 20:25:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.037 20:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:30.037 "name": "raid_bdev1", 00:12:30.037 "uuid": "35cbfefb-7def-4dc2-afff-96cd2d3a9efa", 00:12:30.037 "strip_size_kb": 0, 00:12:30.037 "state": "online", 00:12:30.037 "raid_level": "raid1", 00:12:30.037 "superblock": false, 00:12:30.037 "num_base_bdevs": 2, 00:12:30.037 "num_base_bdevs_discovered": 2, 00:12:30.037 "num_base_bdevs_operational": 2, 00:12:30.037 "base_bdevs_list": [ 00:12:30.037 { 00:12:30.037 "name": "spare", 00:12:30.037 "uuid": "ee2a7c7f-6169-547a-9dc5-bc72e7afa5ec", 00:12:30.037 "is_configured": true, 00:12:30.037 "data_offset": 0, 00:12:30.037 "data_size": 65536 00:12:30.037 }, 00:12:30.037 { 00:12:30.037 "name": "BaseBdev2", 00:12:30.037 "uuid": "7217a566-fe86-5ac6-ad93-eb2a61b9cbfd", 00:12:30.037 "is_configured": true, 00:12:30.037 "data_offset": 0, 00:12:30.037 "data_size": 65536 
00:12:30.037 } 00:12:30.037 ] 00:12:30.037 }' 00:12:30.037 20:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:30.037 20:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:30.037 20:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:30.037 20:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:30.037 20:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:30.037 20:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:30.037 20:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:30.037 20:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:30.037 20:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:30.037 20:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:30.037 20:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.037 20:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.037 20:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.037 20:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.037 20:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.037 20:25:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.037 20:25:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.037 20:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.037 
20:25:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.037 20:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.037 "name": "raid_bdev1", 00:12:30.037 "uuid": "35cbfefb-7def-4dc2-afff-96cd2d3a9efa", 00:12:30.037 "strip_size_kb": 0, 00:12:30.037 "state": "online", 00:12:30.037 "raid_level": "raid1", 00:12:30.037 "superblock": false, 00:12:30.037 "num_base_bdevs": 2, 00:12:30.037 "num_base_bdevs_discovered": 2, 00:12:30.037 "num_base_bdevs_operational": 2, 00:12:30.037 "base_bdevs_list": [ 00:12:30.037 { 00:12:30.037 "name": "spare", 00:12:30.037 "uuid": "ee2a7c7f-6169-547a-9dc5-bc72e7afa5ec", 00:12:30.037 "is_configured": true, 00:12:30.038 "data_offset": 0, 00:12:30.038 "data_size": 65536 00:12:30.038 }, 00:12:30.038 { 00:12:30.038 "name": "BaseBdev2", 00:12:30.038 "uuid": "7217a566-fe86-5ac6-ad93-eb2a61b9cbfd", 00:12:30.038 "is_configured": true, 00:12:30.038 "data_offset": 0, 00:12:30.038 "data_size": 65536 00:12:30.038 } 00:12:30.038 ] 00:12:30.038 }' 00:12:30.038 20:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.038 20:25:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.604 20:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:30.604 20:25:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.604 20:25:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.604 [2024-11-26 20:25:23.961091] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:30.604 [2024-11-26 20:25:23.961198] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:30.604 [2024-11-26 20:25:23.961327] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:30.604 [2024-11-26 20:25:23.961466] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:30.604 [2024-11-26 20:25:23.961532] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:12:30.604 20:25:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.604 20:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:12:30.604 20:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.604 20:25:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.604 20:25:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.604 20:25:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.604 20:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:30.604 20:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:30.604 20:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:30.604 20:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:30.604 20:25:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:30.604 20:25:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:30.604 20:25:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:30.604 20:25:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:30.604 20:25:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:30.604 20:25:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:30.604 20:25:24 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:30.604 20:25:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:30.604 20:25:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:30.863 /dev/nbd0 00:12:30.863 20:25:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:30.863 20:25:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:30.863 20:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:30.863 20:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:30.863 20:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:30.863 20:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:30.863 20:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:30.863 20:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:30.863 20:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:30.863 20:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:30.863 20:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:30.863 1+0 records in 00:12:30.863 1+0 records out 00:12:30.863 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00035055 s, 11.7 MB/s 00:12:30.863 20:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:30.863 20:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:30.863 20:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # 
rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:30.863 20:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:30.863 20:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:30.863 20:25:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:30.863 20:25:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:30.863 20:25:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:31.125 /dev/nbd1 00:12:31.125 20:25:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:31.125 20:25:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:31.125 20:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:31.125 20:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:31.125 20:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:31.125 20:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:31.125 20:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:31.125 20:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:31.125 20:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:31.125 20:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:31.125 20:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:31.125 1+0 records in 00:12:31.125 1+0 records out 00:12:31.125 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000431542 s, 9.5 MB/s 00:12:31.125 20:25:24 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:31.125 20:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:31.125 20:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:31.125 20:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:31.125 20:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:31.125 20:25:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:31.125 20:25:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:31.125 20:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:31.384 20:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:31.384 20:25:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:31.384 20:25:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:31.384 20:25:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:31.384 20:25:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:31.384 20:25:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:31.384 20:25:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:31.641 20:25:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:31.641 20:25:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:31.641 20:25:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:31.641 
20:25:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:31.641 20:25:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:31.641 20:25:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:31.641 20:25:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:31.641 20:25:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:31.641 20:25:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:31.641 20:25:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:31.641 20:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:31.900 20:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:31.900 20:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:31.900 20:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:31.900 20:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:31.900 20:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:31.900 20:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:31.900 20:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:31.900 20:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:31.900 20:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 86541 00:12:31.900 20:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 86541 ']' 00:12:31.900 20:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 86541 00:12:31.900 20:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 
-- # uname 00:12:31.900 20:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:31.900 20:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86541 00:12:31.900 killing process with pid 86541 00:12:31.900 Received shutdown signal, test time was about 60.000000 seconds 00:12:31.900 00:12:31.900 Latency(us) 00:12:31.900 [2024-11-26T20:25:25.452Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:31.900 [2024-11-26T20:25:25.452Z] =================================================================================================================== 00:12:31.900 [2024-11-26T20:25:25.452Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:31.900 20:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:31.900 20:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:31.900 20:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86541' 00:12:31.900 20:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 86541 00:12:31.900 [2024-11-26 20:25:25.236697] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:31.900 20:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 86541 00:12:31.900 [2024-11-26 20:25:25.288154] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:32.159 20:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:12:32.159 00:12:32.159 real 0m14.569s 00:12:32.159 user 0m16.813s 00:12:32.159 sys 0m3.146s 00:12:32.159 ************************************ 00:12:32.159 END TEST raid_rebuild_test 00:12:32.159 ************************************ 00:12:32.159 20:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:32.159 20:25:25 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.159 20:25:25 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:12:32.159 20:25:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:32.159 20:25:25 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:32.159 20:25:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:32.419 ************************************ 00:12:32.419 START TEST raid_rebuild_test_sb 00:12:32.419 ************************************ 00:12:32.419 20:25:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:12:32.419 20:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:32.419 20:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:32.419 20:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:32.419 20:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:32.419 20:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:32.419 20:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:32.419 20:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:32.419 20:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:32.419 20:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:32.419 20:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:32.419 20:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:32.419 20:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:32.419 20:25:25 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:32.419 20:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:32.419 20:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:32.419 20:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:32.419 20:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:32.419 20:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:32.419 20:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:32.419 20:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:32.419 20:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:32.419 20:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:32.419 20:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:32.419 20:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:32.419 20:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=86952 00:12:32.419 20:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 86952 00:12:32.419 20:25:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:32.419 20:25:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 86952 ']' 00:12:32.419 20:25:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.419 20:25:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:32.419 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.419 20:25:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.419 20:25:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:32.419 20:25:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.419 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:32.419 Zero copy mechanism will not be used. 00:12:32.419 [2024-11-26 20:25:25.810725] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:12:32.419 [2024-11-26 20:25:25.810844] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86952 ] 00:12:32.679 [2024-11-26 20:25:25.973953] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.679 [2024-11-26 20:25:26.052753] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.679 [2024-11-26 20:25:26.131006] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:32.679 [2024-11-26 20:25:26.131044] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:33.266 20:25:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:33.266 20:25:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:12:33.266 20:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:33.266 20:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:33.266 20:25:26 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.266 20:25:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.266 BaseBdev1_malloc 00:12:33.266 20:25:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.266 20:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:33.266 20:25:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.266 20:25:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.266 [2024-11-26 20:25:26.709522] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:33.266 [2024-11-26 20:25:26.709703] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.266 [2024-11-26 20:25:26.709790] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:33.266 [2024-11-26 20:25:26.709870] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.266 [2024-11-26 20:25:26.712491] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.266 [2024-11-26 20:25:26.712581] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:33.266 BaseBdev1 00:12:33.266 20:25:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.266 20:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:33.266 20:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:33.266 20:25:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.266 20:25:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.266 BaseBdev2_malloc 00:12:33.266 
20:25:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.266 20:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:33.266 20:25:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.266 20:25:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.266 [2024-11-26 20:25:26.747424] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:33.266 [2024-11-26 20:25:26.747600] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.266 [2024-11-26 20:25:26.747712] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:33.266 [2024-11-26 20:25:26.747772] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.266 [2024-11-26 20:25:26.750585] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.266 [2024-11-26 20:25:26.750705] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:33.266 BaseBdev2 00:12:33.266 20:25:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.266 20:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:33.266 20:25:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.266 20:25:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.266 spare_malloc 00:12:33.266 20:25:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.266 20:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:33.266 20:25:26 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.266 20:25:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.266 spare_delay 00:12:33.266 20:25:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.266 20:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:33.266 20:25:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.266 20:25:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.266 [2024-11-26 20:25:26.783031] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:33.266 [2024-11-26 20:25:26.783107] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.266 [2024-11-26 20:25:26.783228] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:33.266 [2024-11-26 20:25:26.783275] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.266 [2024-11-26 20:25:26.785769] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.266 [2024-11-26 20:25:26.785871] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:33.266 spare 00:12:33.266 20:25:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.267 20:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:33.267 20:25:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.267 20:25:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.267 [2024-11-26 20:25:26.795080] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:33.267 [2024-11-26 
20:25:26.797286] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:33.267 [2024-11-26 20:25:26.797480] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:12:33.267 [2024-11-26 20:25:26.797496] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:33.267 [2024-11-26 20:25:26.797834] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:12:33.267 [2024-11-26 20:25:26.798006] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:12:33.267 [2024-11-26 20:25:26.798030] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:12:33.267 [2024-11-26 20:25:26.798230] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:33.267 20:25:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.267 20:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:33.267 20:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:33.267 20:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:33.267 20:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:33.267 20:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:33.267 20:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:33.267 20:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.267 20:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.267 20:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:12:33.267 20:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.527 20:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.527 20:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.527 20:25:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.527 20:25:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.527 20:25:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.527 20:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.527 "name": "raid_bdev1", 00:12:33.527 "uuid": "cfc9edc4-e914-4ccb-84ef-e6fe73bbbb33", 00:12:33.527 "strip_size_kb": 0, 00:12:33.527 "state": "online", 00:12:33.527 "raid_level": "raid1", 00:12:33.527 "superblock": true, 00:12:33.527 "num_base_bdevs": 2, 00:12:33.527 "num_base_bdevs_discovered": 2, 00:12:33.527 "num_base_bdevs_operational": 2, 00:12:33.527 "base_bdevs_list": [ 00:12:33.527 { 00:12:33.527 "name": "BaseBdev1", 00:12:33.527 "uuid": "f09eebe9-d6a2-5852-b78f-734c6fadddee", 00:12:33.527 "is_configured": true, 00:12:33.527 "data_offset": 2048, 00:12:33.527 "data_size": 63488 00:12:33.527 }, 00:12:33.527 { 00:12:33.527 "name": "BaseBdev2", 00:12:33.527 "uuid": "11a5140d-341e-5c57-ac9b-7ca167a08d95", 00:12:33.527 "is_configured": true, 00:12:33.527 "data_offset": 2048, 00:12:33.527 "data_size": 63488 00:12:33.527 } 00:12:33.528 ] 00:12:33.528 }' 00:12:33.528 20:25:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.528 20:25:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.787 20:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:33.787 20:25:27 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:33.787 20:25:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.787 20:25:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.787 [2024-11-26 20:25:27.218635] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:33.787 20:25:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.787 20:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:33.787 20:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:33.787 20:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.787 20:25:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.787 20:25:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.787 20:25:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.787 20:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:33.787 20:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:33.787 20:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:33.787 20:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:33.788 20:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:33.788 20:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:33.788 20:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:33.788 20:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local 
bdev_list 00:12:33.788 20:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:33.788 20:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:33.788 20:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:33.788 20:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:33.788 20:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:33.788 20:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:34.047 [2024-11-26 20:25:27.505918] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:34.047 /dev/nbd0 00:12:34.047 20:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:34.047 20:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:34.047 20:25:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:34.047 20:25:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:12:34.047 20:25:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:34.047 20:25:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:34.047 20:25:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:34.047 20:25:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:12:34.047 20:25:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:34.047 20:25:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:34.047 20:25:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:34.047 1+0 records in 00:12:34.047 1+0 records out 00:12:34.047 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000511537 s, 8.0 MB/s 00:12:34.047 20:25:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:34.047 20:25:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:12:34.047 20:25:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:34.047 20:25:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:34.047 20:25:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:12:34.047 20:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:34.047 20:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:34.047 20:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:34.047 20:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:34.047 20:25:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:12:38.254 63488+0 records in 00:12:38.254 63488+0 records out 00:12:38.254 32505856 bytes (33 MB, 31 MiB) copied, 4.19191 s, 7.8 MB/s 00:12:38.254 20:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:38.254 20:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:38.254 20:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:38.254 20:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:38.254 20:25:31 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@51 -- # local i 00:12:38.254 20:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:38.255 20:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:38.514 [2024-11-26 20:25:31.975599] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:38.514 20:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:38.514 20:25:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:38.514 20:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:38.514 20:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:38.514 20:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:38.514 20:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:38.514 20:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:38.514 20:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:38.514 20:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:38.514 20:25:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.514 20:25:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.514 [2024-11-26 20:25:32.011649] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:38.514 20:25:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.514 20:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:38.514 20:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:12:38.514 20:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:38.514 20:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:38.514 20:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:38.514 20:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:38.514 20:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.514 20:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.514 20:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.514 20:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.514 20:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.514 20:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.514 20:25:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.514 20:25:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.514 20:25:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.773 20:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.773 "name": "raid_bdev1", 00:12:38.773 "uuid": "cfc9edc4-e914-4ccb-84ef-e6fe73bbbb33", 00:12:38.773 "strip_size_kb": 0, 00:12:38.773 "state": "online", 00:12:38.773 "raid_level": "raid1", 00:12:38.773 "superblock": true, 00:12:38.773 "num_base_bdevs": 2, 00:12:38.773 "num_base_bdevs_discovered": 1, 00:12:38.773 "num_base_bdevs_operational": 1, 00:12:38.773 "base_bdevs_list": [ 00:12:38.773 { 00:12:38.773 "name": null, 00:12:38.773 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:38.773 "is_configured": false, 00:12:38.773 "data_offset": 0, 00:12:38.773 "data_size": 63488 00:12:38.773 }, 00:12:38.773 { 00:12:38.773 "name": "BaseBdev2", 00:12:38.773 "uuid": "11a5140d-341e-5c57-ac9b-7ca167a08d95", 00:12:38.773 "is_configured": true, 00:12:38.773 "data_offset": 2048, 00:12:38.773 "data_size": 63488 00:12:38.773 } 00:12:38.773 ] 00:12:38.773 }' 00:12:38.773 20:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.773 20:25:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.033 20:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:39.033 20:25:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.033 20:25:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.033 [2024-11-26 20:25:32.474851] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:39.033 [2024-11-26 20:25:32.480469] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca31c0 00:12:39.033 20:25:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.033 20:25:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:39.033 [2024-11-26 20:25:32.482593] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:39.969 20:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:39.969 20:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:39.969 20:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:39.969 20:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:39.969 
20:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:39.970 20:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.970 20:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.970 20:25:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.970 20:25:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.970 20:25:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.229 20:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:40.229 "name": "raid_bdev1", 00:12:40.229 "uuid": "cfc9edc4-e914-4ccb-84ef-e6fe73bbbb33", 00:12:40.229 "strip_size_kb": 0, 00:12:40.229 "state": "online", 00:12:40.229 "raid_level": "raid1", 00:12:40.229 "superblock": true, 00:12:40.229 "num_base_bdevs": 2, 00:12:40.229 "num_base_bdevs_discovered": 2, 00:12:40.229 "num_base_bdevs_operational": 2, 00:12:40.229 "process": { 00:12:40.229 "type": "rebuild", 00:12:40.229 "target": "spare", 00:12:40.229 "progress": { 00:12:40.229 "blocks": 20480, 00:12:40.229 "percent": 32 00:12:40.229 } 00:12:40.229 }, 00:12:40.229 "base_bdevs_list": [ 00:12:40.229 { 00:12:40.229 "name": "spare", 00:12:40.229 "uuid": "13c7e670-d597-51ff-bae7-b97689028752", 00:12:40.229 "is_configured": true, 00:12:40.229 "data_offset": 2048, 00:12:40.229 "data_size": 63488 00:12:40.229 }, 00:12:40.229 { 00:12:40.229 "name": "BaseBdev2", 00:12:40.229 "uuid": "11a5140d-341e-5c57-ac9b-7ca167a08d95", 00:12:40.229 "is_configured": true, 00:12:40.229 "data_offset": 2048, 00:12:40.229 "data_size": 63488 00:12:40.229 } 00:12:40.229 ] 00:12:40.229 }' 00:12:40.229 20:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:40.229 20:25:33 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:40.229 20:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:40.229 20:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:40.229 20:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:40.229 20:25:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.229 20:25:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.229 [2024-11-26 20:25:33.647480] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:40.229 [2024-11-26 20:25:33.691239] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:40.229 [2024-11-26 20:25:33.691320] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:40.229 [2024-11-26 20:25:33.691341] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:40.229 [2024-11-26 20:25:33.691362] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:40.229 20:25:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.229 20:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:40.229 20:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:40.229 20:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:40.229 20:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:40.229 20:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:40.229 20:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:12:40.229 20:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.229 20:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.229 20:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.229 20:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.229 20:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.229 20:25:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.229 20:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.229 20:25:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.229 20:25:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.229 20:25:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.229 "name": "raid_bdev1", 00:12:40.229 "uuid": "cfc9edc4-e914-4ccb-84ef-e6fe73bbbb33", 00:12:40.229 "strip_size_kb": 0, 00:12:40.229 "state": "online", 00:12:40.229 "raid_level": "raid1", 00:12:40.229 "superblock": true, 00:12:40.229 "num_base_bdevs": 2, 00:12:40.229 "num_base_bdevs_discovered": 1, 00:12:40.229 "num_base_bdevs_operational": 1, 00:12:40.229 "base_bdevs_list": [ 00:12:40.229 { 00:12:40.229 "name": null, 00:12:40.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.229 "is_configured": false, 00:12:40.229 "data_offset": 0, 00:12:40.229 "data_size": 63488 00:12:40.229 }, 00:12:40.229 { 00:12:40.229 "name": "BaseBdev2", 00:12:40.229 "uuid": "11a5140d-341e-5c57-ac9b-7ca167a08d95", 00:12:40.229 "is_configured": true, 00:12:40.229 "data_offset": 2048, 00:12:40.229 "data_size": 63488 00:12:40.229 } 00:12:40.229 ] 00:12:40.229 }' 00:12:40.229 20:25:33 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.229 20:25:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.798 20:25:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:40.798 20:25:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:40.798 20:25:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:40.798 20:25:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:40.798 20:25:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:40.798 20:25:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.798 20:25:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.799 20:25:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.799 20:25:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.799 20:25:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.799 20:25:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:40.799 "name": "raid_bdev1", 00:12:40.799 "uuid": "cfc9edc4-e914-4ccb-84ef-e6fe73bbbb33", 00:12:40.799 "strip_size_kb": 0, 00:12:40.799 "state": "online", 00:12:40.799 "raid_level": "raid1", 00:12:40.799 "superblock": true, 00:12:40.799 "num_base_bdevs": 2, 00:12:40.799 "num_base_bdevs_discovered": 1, 00:12:40.799 "num_base_bdevs_operational": 1, 00:12:40.799 "base_bdevs_list": [ 00:12:40.799 { 00:12:40.799 "name": null, 00:12:40.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.799 "is_configured": false, 00:12:40.799 "data_offset": 0, 00:12:40.799 "data_size": 63488 00:12:40.799 }, 00:12:40.799 
{ 00:12:40.799 "name": "BaseBdev2", 00:12:40.799 "uuid": "11a5140d-341e-5c57-ac9b-7ca167a08d95", 00:12:40.799 "is_configured": true, 00:12:40.799 "data_offset": 2048, 00:12:40.799 "data_size": 63488 00:12:40.799 } 00:12:40.799 ] 00:12:40.799 }' 00:12:40.799 20:25:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:40.799 20:25:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:40.799 20:25:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:40.799 20:25:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:40.799 20:25:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:40.799 20:25:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.799 20:25:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.799 [2024-11-26 20:25:34.305325] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:40.799 [2024-11-26 20:25:34.311178] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3290 00:12:40.799 20:25:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.799 20:25:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:40.799 [2024-11-26 20:25:34.313304] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:42.176 20:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:42.176 20:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:42.176 20:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:42.176 20:25:35 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:42.176 20:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:42.176 20:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.176 20:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.176 20:25:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.176 20:25:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.176 20:25:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.176 20:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:42.176 "name": "raid_bdev1", 00:12:42.176 "uuid": "cfc9edc4-e914-4ccb-84ef-e6fe73bbbb33", 00:12:42.176 "strip_size_kb": 0, 00:12:42.176 "state": "online", 00:12:42.176 "raid_level": "raid1", 00:12:42.176 "superblock": true, 00:12:42.176 "num_base_bdevs": 2, 00:12:42.176 "num_base_bdevs_discovered": 2, 00:12:42.176 "num_base_bdevs_operational": 2, 00:12:42.176 "process": { 00:12:42.176 "type": "rebuild", 00:12:42.176 "target": "spare", 00:12:42.176 "progress": { 00:12:42.176 "blocks": 20480, 00:12:42.176 "percent": 32 00:12:42.176 } 00:12:42.176 }, 00:12:42.176 "base_bdevs_list": [ 00:12:42.176 { 00:12:42.176 "name": "spare", 00:12:42.176 "uuid": "13c7e670-d597-51ff-bae7-b97689028752", 00:12:42.176 "is_configured": true, 00:12:42.176 "data_offset": 2048, 00:12:42.176 "data_size": 63488 00:12:42.176 }, 00:12:42.176 { 00:12:42.176 "name": "BaseBdev2", 00:12:42.176 "uuid": "11a5140d-341e-5c57-ac9b-7ca167a08d95", 00:12:42.176 "is_configured": true, 00:12:42.176 "data_offset": 2048, 00:12:42.176 "data_size": 63488 00:12:42.176 } 00:12:42.176 ] 00:12:42.176 }' 00:12:42.176 20:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:12:42.176 20:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:42.176 20:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:42.176 20:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:42.176 20:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:42.176 20:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:42.176 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:42.176 20:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:42.176 20:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:42.176 20:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:42.176 20:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=324 00:12:42.176 20:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:42.176 20:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:42.176 20:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:42.176 20:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:42.176 20:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:42.176 20:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:42.176 20:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.176 20:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:42.176 20:25:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.176 20:25:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.176 20:25:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.176 20:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:42.176 "name": "raid_bdev1", 00:12:42.176 "uuid": "cfc9edc4-e914-4ccb-84ef-e6fe73bbbb33", 00:12:42.176 "strip_size_kb": 0, 00:12:42.176 "state": "online", 00:12:42.176 "raid_level": "raid1", 00:12:42.176 "superblock": true, 00:12:42.176 "num_base_bdevs": 2, 00:12:42.176 "num_base_bdevs_discovered": 2, 00:12:42.176 "num_base_bdevs_operational": 2, 00:12:42.176 "process": { 00:12:42.176 "type": "rebuild", 00:12:42.176 "target": "spare", 00:12:42.176 "progress": { 00:12:42.176 "blocks": 22528, 00:12:42.176 "percent": 35 00:12:42.176 } 00:12:42.176 }, 00:12:42.176 "base_bdevs_list": [ 00:12:42.176 { 00:12:42.176 "name": "spare", 00:12:42.176 "uuid": "13c7e670-d597-51ff-bae7-b97689028752", 00:12:42.176 "is_configured": true, 00:12:42.176 "data_offset": 2048, 00:12:42.176 "data_size": 63488 00:12:42.176 }, 00:12:42.176 { 00:12:42.176 "name": "BaseBdev2", 00:12:42.176 "uuid": "11a5140d-341e-5c57-ac9b-7ca167a08d95", 00:12:42.176 "is_configured": true, 00:12:42.176 "data_offset": 2048, 00:12:42.176 "data_size": 63488 00:12:42.176 } 00:12:42.176 ] 00:12:42.176 }' 00:12:42.176 20:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:42.176 20:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:42.176 20:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:42.176 20:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:42.176 20:25:35 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:43.113 20:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:43.113 20:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:43.113 20:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:43.113 20:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:43.113 20:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:43.113 20:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:43.113 20:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.113 20:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.113 20:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.113 20:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.113 20:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.373 20:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:43.373 "name": "raid_bdev1", 00:12:43.373 "uuid": "cfc9edc4-e914-4ccb-84ef-e6fe73bbbb33", 00:12:43.373 "strip_size_kb": 0, 00:12:43.373 "state": "online", 00:12:43.373 "raid_level": "raid1", 00:12:43.373 "superblock": true, 00:12:43.373 "num_base_bdevs": 2, 00:12:43.373 "num_base_bdevs_discovered": 2, 00:12:43.373 "num_base_bdevs_operational": 2, 00:12:43.373 "process": { 00:12:43.373 "type": "rebuild", 00:12:43.373 "target": "spare", 00:12:43.373 "progress": { 00:12:43.373 "blocks": 45056, 00:12:43.373 "percent": 70 00:12:43.373 } 00:12:43.373 }, 00:12:43.373 "base_bdevs_list": [ 00:12:43.373 { 
00:12:43.373 "name": "spare", 00:12:43.373 "uuid": "13c7e670-d597-51ff-bae7-b97689028752", 00:12:43.373 "is_configured": true, 00:12:43.373 "data_offset": 2048, 00:12:43.373 "data_size": 63488 00:12:43.373 }, 00:12:43.373 { 00:12:43.373 "name": "BaseBdev2", 00:12:43.373 "uuid": "11a5140d-341e-5c57-ac9b-7ca167a08d95", 00:12:43.373 "is_configured": true, 00:12:43.373 "data_offset": 2048, 00:12:43.373 "data_size": 63488 00:12:43.373 } 00:12:43.373 ] 00:12:43.373 }' 00:12:43.373 20:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:43.373 20:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:43.373 20:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:43.373 20:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:43.373 20:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:43.940 [2024-11-26 20:25:37.433761] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:43.940 [2024-11-26 20:25:37.433860] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:43.940 [2024-11-26 20:25:37.434009] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:44.507 20:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:44.507 20:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:44.507 20:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:44.507 20:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:44.507 20:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:44.507 20:25:37 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:44.507 20:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.507 20:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.507 20:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.507 20:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.507 20:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.507 20:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:44.507 "name": "raid_bdev1", 00:12:44.507 "uuid": "cfc9edc4-e914-4ccb-84ef-e6fe73bbbb33", 00:12:44.507 "strip_size_kb": 0, 00:12:44.507 "state": "online", 00:12:44.507 "raid_level": "raid1", 00:12:44.507 "superblock": true, 00:12:44.507 "num_base_bdevs": 2, 00:12:44.507 "num_base_bdevs_discovered": 2, 00:12:44.507 "num_base_bdevs_operational": 2, 00:12:44.507 "base_bdevs_list": [ 00:12:44.507 { 00:12:44.507 "name": "spare", 00:12:44.507 "uuid": "13c7e670-d597-51ff-bae7-b97689028752", 00:12:44.507 "is_configured": true, 00:12:44.507 "data_offset": 2048, 00:12:44.507 "data_size": 63488 00:12:44.507 }, 00:12:44.507 { 00:12:44.507 "name": "BaseBdev2", 00:12:44.507 "uuid": "11a5140d-341e-5c57-ac9b-7ca167a08d95", 00:12:44.507 "is_configured": true, 00:12:44.507 "data_offset": 2048, 00:12:44.507 "data_size": 63488 00:12:44.507 } 00:12:44.507 ] 00:12:44.507 }' 00:12:44.507 20:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:44.507 20:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:44.507 20:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:44.507 20:25:37 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:44.507 20:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:12:44.507 20:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:44.507 20:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:44.507 20:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:44.507 20:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:44.507 20:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:44.507 20:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.507 20:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.507 20:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.507 20:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.507 20:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.507 20:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:44.507 "name": "raid_bdev1", 00:12:44.507 "uuid": "cfc9edc4-e914-4ccb-84ef-e6fe73bbbb33", 00:12:44.507 "strip_size_kb": 0, 00:12:44.507 "state": "online", 00:12:44.507 "raid_level": "raid1", 00:12:44.507 "superblock": true, 00:12:44.507 "num_base_bdevs": 2, 00:12:44.507 "num_base_bdevs_discovered": 2, 00:12:44.507 "num_base_bdevs_operational": 2, 00:12:44.507 "base_bdevs_list": [ 00:12:44.507 { 00:12:44.507 "name": "spare", 00:12:44.507 "uuid": "13c7e670-d597-51ff-bae7-b97689028752", 00:12:44.507 "is_configured": true, 00:12:44.507 "data_offset": 2048, 00:12:44.507 "data_size": 63488 00:12:44.507 }, 00:12:44.507 { 00:12:44.507 "name": 
"BaseBdev2", 00:12:44.507 "uuid": "11a5140d-341e-5c57-ac9b-7ca167a08d95", 00:12:44.507 "is_configured": true, 00:12:44.507 "data_offset": 2048, 00:12:44.507 "data_size": 63488 00:12:44.507 } 00:12:44.507 ] 00:12:44.507 }' 00:12:44.507 20:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:44.507 20:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:44.507 20:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:44.507 20:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:44.507 20:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:44.507 20:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:44.507 20:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:44.507 20:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:44.507 20:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:44.508 20:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:44.508 20:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.508 20:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.508 20:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.508 20:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.766 20:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.766 20:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:12:44.766 20:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.766 20:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.766 20:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.766 20:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.766 "name": "raid_bdev1", 00:12:44.766 "uuid": "cfc9edc4-e914-4ccb-84ef-e6fe73bbbb33", 00:12:44.766 "strip_size_kb": 0, 00:12:44.766 "state": "online", 00:12:44.766 "raid_level": "raid1", 00:12:44.766 "superblock": true, 00:12:44.766 "num_base_bdevs": 2, 00:12:44.766 "num_base_bdevs_discovered": 2, 00:12:44.766 "num_base_bdevs_operational": 2, 00:12:44.766 "base_bdevs_list": [ 00:12:44.766 { 00:12:44.766 "name": "spare", 00:12:44.766 "uuid": "13c7e670-d597-51ff-bae7-b97689028752", 00:12:44.766 "is_configured": true, 00:12:44.766 "data_offset": 2048, 00:12:44.766 "data_size": 63488 00:12:44.766 }, 00:12:44.766 { 00:12:44.766 "name": "BaseBdev2", 00:12:44.766 "uuid": "11a5140d-341e-5c57-ac9b-7ca167a08d95", 00:12:44.766 "is_configured": true, 00:12:44.766 "data_offset": 2048, 00:12:44.766 "data_size": 63488 00:12:44.766 } 00:12:44.766 ] 00:12:44.766 }' 00:12:44.766 20:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.766 20:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.026 20:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:45.026 20:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.026 20:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.026 [2024-11-26 20:25:38.486528] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:45.026 [2024-11-26 20:25:38.486650] 
bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:45.026 [2024-11-26 20:25:38.486775] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:45.026 [2024-11-26 20:25:38.486872] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:45.026 [2024-11-26 20:25:38.486925] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:12:45.026 20:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.026 20:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:12:45.026 20:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.026 20:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.026 20:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.026 20:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.026 20:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:45.026 20:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:45.026 20:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:45.026 20:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:45.026 20:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:45.026 20:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:45.026 20:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:45.026 20:25:38 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:45.026 20:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:45.026 20:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:45.026 20:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:45.027 20:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:45.027 20:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:45.284 /dev/nbd0 00:12:45.284 20:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:45.284 20:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:45.284 20:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:45.284 20:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:12:45.284 20:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:45.284 20:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:45.284 20:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:45.284 20:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:12:45.285 20:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:45.285 20:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:45.285 20:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:45.285 1+0 records in 00:12:45.285 1+0 records out 00:12:45.285 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00037384 
s, 11.0 MB/s 00:12:45.285 20:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:45.285 20:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:12:45.285 20:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:45.285 20:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:45.285 20:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:12:45.285 20:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:45.285 20:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:45.285 20:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:45.542 /dev/nbd1 00:12:45.542 20:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:45.542 20:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:45.542 20:25:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:45.542 20:25:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:12:45.542 20:25:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:45.542 20:25:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:45.542 20:25:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:45.542 20:25:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:12:45.542 20:25:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:45.542 20:25:39 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:45.542 20:25:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:45.542 1+0 records in 00:12:45.542 1+0 records out 00:12:45.542 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000646182 s, 6.3 MB/s 00:12:45.542 20:25:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:45.542 20:25:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:12:45.543 20:25:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:45.543 20:25:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:45.543 20:25:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:12:45.543 20:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:45.543 20:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:45.543 20:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:45.808 20:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:45.808 20:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:45.808 20:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:45.808 20:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:45.808 20:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:45.809 20:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:45.809 
20:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:46.068 20:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:46.068 20:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:46.068 20:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:46.068 20:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:46.068 20:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:46.068 20:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:46.068 20:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:46.068 20:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:46.068 20:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:46.068 20:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:46.068 20:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:46.068 20:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:46.068 20:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:46.068 20:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:46.068 20:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:46.068 20:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:46.068 20:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:46.068 20:25:39 bdev_raid.raid_rebuild_test_sb 
-- bdev/nbd_common.sh@45 -- # return 0 00:12:46.068 20:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:46.068 20:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:46.068 20:25:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.068 20:25:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.068 20:25:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.068 20:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:46.068 20:25:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.068 20:25:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.327 [2024-11-26 20:25:39.620053] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:46.327 [2024-11-26 20:25:39.620178] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:46.327 [2024-11-26 20:25:39.620227] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:46.327 [2024-11-26 20:25:39.620267] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:46.327 [2024-11-26 20:25:39.622831] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:46.327 [2024-11-26 20:25:39.622917] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:46.327 [2024-11-26 20:25:39.623037] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:46.327 [2024-11-26 20:25:39.623134] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:46.327 [2024-11-26 20:25:39.623307] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 
is claimed 00:12:46.327 spare 00:12:46.327 20:25:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.327 20:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:46.327 20:25:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.327 20:25:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.327 [2024-11-26 20:25:39.723273] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:12:46.327 [2024-11-26 20:25:39.723415] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:46.327 [2024-11-26 20:25:39.723844] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1940 00:12:46.327 [2024-11-26 20:25:39.724085] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:12:46.327 [2024-11-26 20:25:39.724140] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:12:46.327 [2024-11-26 20:25:39.724367] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:46.327 20:25:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.327 20:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:46.327 20:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:46.327 20:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:46.327 20:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:46.327 20:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:46.327 20:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:12:46.327 20:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.327 20:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.327 20:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.327 20:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.327 20:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.327 20:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.327 20:25:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.327 20:25:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.328 20:25:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.328 20:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.328 "name": "raid_bdev1", 00:12:46.328 "uuid": "cfc9edc4-e914-4ccb-84ef-e6fe73bbbb33", 00:12:46.328 "strip_size_kb": 0, 00:12:46.328 "state": "online", 00:12:46.328 "raid_level": "raid1", 00:12:46.328 "superblock": true, 00:12:46.328 "num_base_bdevs": 2, 00:12:46.328 "num_base_bdevs_discovered": 2, 00:12:46.328 "num_base_bdevs_operational": 2, 00:12:46.328 "base_bdevs_list": [ 00:12:46.328 { 00:12:46.328 "name": "spare", 00:12:46.328 "uuid": "13c7e670-d597-51ff-bae7-b97689028752", 00:12:46.328 "is_configured": true, 00:12:46.328 "data_offset": 2048, 00:12:46.328 "data_size": 63488 00:12:46.328 }, 00:12:46.328 { 00:12:46.328 "name": "BaseBdev2", 00:12:46.328 "uuid": "11a5140d-341e-5c57-ac9b-7ca167a08d95", 00:12:46.328 "is_configured": true, 00:12:46.328 "data_offset": 2048, 00:12:46.328 "data_size": 63488 00:12:46.328 } 00:12:46.328 ] 00:12:46.328 }' 00:12:46.328 20:25:39 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.328 20:25:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.896 20:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:46.896 20:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:46.896 20:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:46.896 20:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:46.896 20:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:46.896 20:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.896 20:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.896 20:25:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.896 20:25:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.896 20:25:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.896 20:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:46.896 "name": "raid_bdev1", 00:12:46.896 "uuid": "cfc9edc4-e914-4ccb-84ef-e6fe73bbbb33", 00:12:46.896 "strip_size_kb": 0, 00:12:46.896 "state": "online", 00:12:46.896 "raid_level": "raid1", 00:12:46.896 "superblock": true, 00:12:46.896 "num_base_bdevs": 2, 00:12:46.896 "num_base_bdevs_discovered": 2, 00:12:46.896 "num_base_bdevs_operational": 2, 00:12:46.896 "base_bdevs_list": [ 00:12:46.896 { 00:12:46.896 "name": "spare", 00:12:46.896 "uuid": "13c7e670-d597-51ff-bae7-b97689028752", 00:12:46.896 "is_configured": true, 00:12:46.896 "data_offset": 2048, 00:12:46.896 "data_size": 63488 00:12:46.896 }, 
00:12:46.896 { 00:12:46.896 "name": "BaseBdev2", 00:12:46.896 "uuid": "11a5140d-341e-5c57-ac9b-7ca167a08d95", 00:12:46.896 "is_configured": true, 00:12:46.896 "data_offset": 2048, 00:12:46.896 "data_size": 63488 00:12:46.896 } 00:12:46.896 ] 00:12:46.896 }' 00:12:46.896 20:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:46.896 20:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:46.896 20:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:46.896 20:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:46.896 20:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.896 20:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:46.896 20:25:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.896 20:25:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.896 20:25:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.896 20:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:46.896 20:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:46.896 20:25:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.896 20:25:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.896 [2024-11-26 20:25:40.431272] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:46.896 20:25:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.896 20:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:12:46.896 20:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:46.896 20:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:46.896 20:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:46.896 20:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:46.896 20:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:46.896 20:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.896 20:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.896 20:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.896 20:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.896 20:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.896 20:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.896 20:25:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.156 20:25:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.156 20:25:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.156 20:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.156 "name": "raid_bdev1", 00:12:47.156 "uuid": "cfc9edc4-e914-4ccb-84ef-e6fe73bbbb33", 00:12:47.156 "strip_size_kb": 0, 00:12:47.156 "state": "online", 00:12:47.156 "raid_level": "raid1", 00:12:47.156 "superblock": true, 00:12:47.156 "num_base_bdevs": 2, 00:12:47.156 "num_base_bdevs_discovered": 1, 00:12:47.156 "num_base_bdevs_operational": 
1, 00:12:47.156 "base_bdevs_list": [ 00:12:47.156 { 00:12:47.156 "name": null, 00:12:47.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.156 "is_configured": false, 00:12:47.156 "data_offset": 0, 00:12:47.156 "data_size": 63488 00:12:47.156 }, 00:12:47.156 { 00:12:47.156 "name": "BaseBdev2", 00:12:47.156 "uuid": "11a5140d-341e-5c57-ac9b-7ca167a08d95", 00:12:47.156 "is_configured": true, 00:12:47.156 "data_offset": 2048, 00:12:47.156 "data_size": 63488 00:12:47.156 } 00:12:47.156 ] 00:12:47.156 }' 00:12:47.156 20:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.156 20:25:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.415 20:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:47.415 20:25:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.415 20:25:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.415 [2024-11-26 20:25:40.926462] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:47.415 [2024-11-26 20:25:40.926742] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:47.415 [2024-11-26 20:25:40.926810] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:47.415 [2024-11-26 20:25:40.926904] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:47.415 [2024-11-26 20:25:40.932587] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1a10 00:12:47.415 20:25:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.415 20:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:47.415 [2024-11-26 20:25:40.934849] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:48.795 20:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:48.795 20:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:48.795 20:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:48.795 20:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:48.795 20:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:48.795 20:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.795 20:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.795 20:25:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.795 20:25:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.795 20:25:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.795 20:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:48.795 "name": "raid_bdev1", 00:12:48.795 "uuid": "cfc9edc4-e914-4ccb-84ef-e6fe73bbbb33", 00:12:48.795 "strip_size_kb": 0, 00:12:48.795 "state": "online", 00:12:48.796 "raid_level": "raid1", 
00:12:48.796 "superblock": true, 00:12:48.796 "num_base_bdevs": 2, 00:12:48.796 "num_base_bdevs_discovered": 2, 00:12:48.796 "num_base_bdevs_operational": 2, 00:12:48.796 "process": { 00:12:48.796 "type": "rebuild", 00:12:48.796 "target": "spare", 00:12:48.796 "progress": { 00:12:48.796 "blocks": 20480, 00:12:48.796 "percent": 32 00:12:48.796 } 00:12:48.796 }, 00:12:48.796 "base_bdevs_list": [ 00:12:48.796 { 00:12:48.796 "name": "spare", 00:12:48.796 "uuid": "13c7e670-d597-51ff-bae7-b97689028752", 00:12:48.796 "is_configured": true, 00:12:48.796 "data_offset": 2048, 00:12:48.796 "data_size": 63488 00:12:48.796 }, 00:12:48.796 { 00:12:48.796 "name": "BaseBdev2", 00:12:48.796 "uuid": "11a5140d-341e-5c57-ac9b-7ca167a08d95", 00:12:48.796 "is_configured": true, 00:12:48.796 "data_offset": 2048, 00:12:48.796 "data_size": 63488 00:12:48.796 } 00:12:48.796 ] 00:12:48.796 }' 00:12:48.796 20:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:48.796 20:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:48.796 20:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:48.796 20:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:48.796 20:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:48.796 20:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.796 20:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.796 [2024-11-26 20:25:42.082451] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:48.796 [2024-11-26 20:25:42.142959] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:48.796 [2024-11-26 20:25:42.143045] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:12:48.796 [2024-11-26 20:25:42.143064] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:48.796 [2024-11-26 20:25:42.143073] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:48.796 20:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.796 20:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:48.796 20:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:48.796 20:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:48.796 20:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:48.796 20:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:48.796 20:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:48.796 20:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.796 20:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.796 20:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.796 20:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.796 20:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.796 20:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.796 20:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.796 20:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.796 20:25:42 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.796 20:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.796 "name": "raid_bdev1", 00:12:48.796 "uuid": "cfc9edc4-e914-4ccb-84ef-e6fe73bbbb33", 00:12:48.796 "strip_size_kb": 0, 00:12:48.796 "state": "online", 00:12:48.796 "raid_level": "raid1", 00:12:48.796 "superblock": true, 00:12:48.796 "num_base_bdevs": 2, 00:12:48.796 "num_base_bdevs_discovered": 1, 00:12:48.796 "num_base_bdevs_operational": 1, 00:12:48.796 "base_bdevs_list": [ 00:12:48.796 { 00:12:48.796 "name": null, 00:12:48.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.796 "is_configured": false, 00:12:48.796 "data_offset": 0, 00:12:48.796 "data_size": 63488 00:12:48.796 }, 00:12:48.796 { 00:12:48.796 "name": "BaseBdev2", 00:12:48.796 "uuid": "11a5140d-341e-5c57-ac9b-7ca167a08d95", 00:12:48.796 "is_configured": true, 00:12:48.796 "data_offset": 2048, 00:12:48.796 "data_size": 63488 00:12:48.796 } 00:12:48.796 ] 00:12:48.796 }' 00:12:48.796 20:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.796 20:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.365 20:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:49.366 20:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.366 20:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.366 [2024-11-26 20:25:42.644696] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:49.366 [2024-11-26 20:25:42.644814] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:49.366 [2024-11-26 20:25:42.644864] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:49.366 [2024-11-26 20:25:42.644913] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:49.366 [2024-11-26 20:25:42.645462] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:49.366 [2024-11-26 20:25:42.645528] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:49.366 [2024-11-26 20:25:42.645682] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:49.366 [2024-11-26 20:25:42.645731] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:49.366 [2024-11-26 20:25:42.645792] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:49.366 [2024-11-26 20:25:42.645840] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:49.366 [2024-11-26 20:25:42.651497] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:12:49.366 spare 00:12:49.366 20:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.366 20:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:49.366 [2024-11-26 20:25:42.653790] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:50.304 20:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:50.304 20:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:50.304 20:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:50.304 20:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:50.304 20:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:50.304 20:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:50.304 20:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.304 20:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.304 20:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.304 20:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.304 20:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:50.304 "name": "raid_bdev1", 00:12:50.304 "uuid": "cfc9edc4-e914-4ccb-84ef-e6fe73bbbb33", 00:12:50.304 "strip_size_kb": 0, 00:12:50.304 "state": "online", 00:12:50.304 "raid_level": "raid1", 00:12:50.304 "superblock": true, 00:12:50.304 "num_base_bdevs": 2, 00:12:50.304 "num_base_bdevs_discovered": 2, 00:12:50.304 "num_base_bdevs_operational": 2, 00:12:50.304 "process": { 00:12:50.304 "type": "rebuild", 00:12:50.304 "target": "spare", 00:12:50.304 "progress": { 00:12:50.304 "blocks": 20480, 00:12:50.304 "percent": 32 00:12:50.304 } 00:12:50.304 }, 00:12:50.304 "base_bdevs_list": [ 00:12:50.304 { 00:12:50.304 "name": "spare", 00:12:50.304 "uuid": "13c7e670-d597-51ff-bae7-b97689028752", 00:12:50.304 "is_configured": true, 00:12:50.304 "data_offset": 2048, 00:12:50.304 "data_size": 63488 00:12:50.304 }, 00:12:50.304 { 00:12:50.304 "name": "BaseBdev2", 00:12:50.304 "uuid": "11a5140d-341e-5c57-ac9b-7ca167a08d95", 00:12:50.304 "is_configured": true, 00:12:50.304 "data_offset": 2048, 00:12:50.304 "data_size": 63488 00:12:50.304 } 00:12:50.304 ] 00:12:50.304 }' 00:12:50.304 20:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:50.304 20:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:50.304 20:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:50.304 
20:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:50.304 20:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:50.304 20:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.304 20:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.304 [2024-11-26 20:25:43.813425] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:50.564 [2024-11-26 20:25:43.862079] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:50.564 [2024-11-26 20:25:43.862251] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:50.564 [2024-11-26 20:25:43.862298] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:50.564 [2024-11-26 20:25:43.862326] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:50.564 20:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.564 20:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:50.564 20:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:50.564 20:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:50.564 20:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:50.564 20:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:50.564 20:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:50.564 20:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.564 20:25:43 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.564 20:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.564 20:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.564 20:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.564 20:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.564 20:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.564 20:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.564 20:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.564 20:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.564 "name": "raid_bdev1", 00:12:50.564 "uuid": "cfc9edc4-e914-4ccb-84ef-e6fe73bbbb33", 00:12:50.564 "strip_size_kb": 0, 00:12:50.564 "state": "online", 00:12:50.564 "raid_level": "raid1", 00:12:50.564 "superblock": true, 00:12:50.564 "num_base_bdevs": 2, 00:12:50.564 "num_base_bdevs_discovered": 1, 00:12:50.564 "num_base_bdevs_operational": 1, 00:12:50.564 "base_bdevs_list": [ 00:12:50.564 { 00:12:50.564 "name": null, 00:12:50.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.564 "is_configured": false, 00:12:50.564 "data_offset": 0, 00:12:50.564 "data_size": 63488 00:12:50.564 }, 00:12:50.564 { 00:12:50.564 "name": "BaseBdev2", 00:12:50.564 "uuid": "11a5140d-341e-5c57-ac9b-7ca167a08d95", 00:12:50.564 "is_configured": true, 00:12:50.564 "data_offset": 2048, 00:12:50.564 "data_size": 63488 00:12:50.564 } 00:12:50.564 ] 00:12:50.564 }' 00:12:50.564 20:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.564 20:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.824 20:25:44 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:50.824 20:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:50.824 20:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:50.824 20:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:50.824 20:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:50.824 20:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.824 20:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.824 20:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.824 20:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.824 20:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.084 20:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:51.084 "name": "raid_bdev1", 00:12:51.084 "uuid": "cfc9edc4-e914-4ccb-84ef-e6fe73bbbb33", 00:12:51.084 "strip_size_kb": 0, 00:12:51.084 "state": "online", 00:12:51.084 "raid_level": "raid1", 00:12:51.084 "superblock": true, 00:12:51.084 "num_base_bdevs": 2, 00:12:51.084 "num_base_bdevs_discovered": 1, 00:12:51.084 "num_base_bdevs_operational": 1, 00:12:51.084 "base_bdevs_list": [ 00:12:51.084 { 00:12:51.084 "name": null, 00:12:51.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.084 "is_configured": false, 00:12:51.084 "data_offset": 0, 00:12:51.084 "data_size": 63488 00:12:51.084 }, 00:12:51.084 { 00:12:51.084 "name": "BaseBdev2", 00:12:51.084 "uuid": "11a5140d-341e-5c57-ac9b-7ca167a08d95", 00:12:51.084 "is_configured": true, 00:12:51.084 "data_offset": 2048, 00:12:51.084 "data_size": 
63488 00:12:51.084 } 00:12:51.084 ] 00:12:51.084 }' 00:12:51.084 20:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:51.084 20:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:51.084 20:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:51.084 20:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:51.084 20:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:51.084 20:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.084 20:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.084 20:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.084 20:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:51.084 20:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.084 20:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.084 [2024-11-26 20:25:44.488053] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:51.084 [2024-11-26 20:25:44.488128] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:51.084 [2024-11-26 20:25:44.488150] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:51.084 [2024-11-26 20:25:44.488161] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:51.084 [2024-11-26 20:25:44.488569] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:51.084 [2024-11-26 20:25:44.488598] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:12:51.084 [2024-11-26 20:25:44.488702] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:51.084 [2024-11-26 20:25:44.488721] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:51.084 [2024-11-26 20:25:44.488739] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:51.084 [2024-11-26 20:25:44.488755] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:51.084 BaseBdev1 00:12:51.084 20:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.084 20:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:52.054 20:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:52.054 20:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:52.054 20:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:52.054 20:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:52.054 20:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:52.054 20:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:52.054 20:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.054 20:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.054 20:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.054 20:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.054 20:25:45 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.054 20:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.054 20:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.054 20:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.054 20:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.054 20:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.054 "name": "raid_bdev1", 00:12:52.054 "uuid": "cfc9edc4-e914-4ccb-84ef-e6fe73bbbb33", 00:12:52.054 "strip_size_kb": 0, 00:12:52.054 "state": "online", 00:12:52.054 "raid_level": "raid1", 00:12:52.054 "superblock": true, 00:12:52.054 "num_base_bdevs": 2, 00:12:52.054 "num_base_bdevs_discovered": 1, 00:12:52.054 "num_base_bdevs_operational": 1, 00:12:52.054 "base_bdevs_list": [ 00:12:52.054 { 00:12:52.054 "name": null, 00:12:52.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.054 "is_configured": false, 00:12:52.054 "data_offset": 0, 00:12:52.054 "data_size": 63488 00:12:52.054 }, 00:12:52.054 { 00:12:52.054 "name": "BaseBdev2", 00:12:52.054 "uuid": "11a5140d-341e-5c57-ac9b-7ca167a08d95", 00:12:52.054 "is_configured": true, 00:12:52.054 "data_offset": 2048, 00:12:52.054 "data_size": 63488 00:12:52.054 } 00:12:52.054 ] 00:12:52.054 }' 00:12:52.054 20:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.054 20:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.622 20:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:52.622 20:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:52.622 20:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:12:52.622 20:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:52.622 20:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:52.622 20:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.622 20:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.622 20:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.622 20:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.622 20:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.622 20:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:52.622 "name": "raid_bdev1", 00:12:52.622 "uuid": "cfc9edc4-e914-4ccb-84ef-e6fe73bbbb33", 00:12:52.622 "strip_size_kb": 0, 00:12:52.622 "state": "online", 00:12:52.622 "raid_level": "raid1", 00:12:52.622 "superblock": true, 00:12:52.622 "num_base_bdevs": 2, 00:12:52.622 "num_base_bdevs_discovered": 1, 00:12:52.622 "num_base_bdevs_operational": 1, 00:12:52.622 "base_bdevs_list": [ 00:12:52.622 { 00:12:52.622 "name": null, 00:12:52.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.622 "is_configured": false, 00:12:52.622 "data_offset": 0, 00:12:52.622 "data_size": 63488 00:12:52.622 }, 00:12:52.622 { 00:12:52.622 "name": "BaseBdev2", 00:12:52.622 "uuid": "11a5140d-341e-5c57-ac9b-7ca167a08d95", 00:12:52.622 "is_configured": true, 00:12:52.622 "data_offset": 2048, 00:12:52.622 "data_size": 63488 00:12:52.622 } 00:12:52.622 ] 00:12:52.622 }' 00:12:52.622 20:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:52.622 20:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:52.622 20:25:46 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:52.622 20:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:52.622 20:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:52.622 20:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:12:52.622 20:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:52.622 20:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:52.622 20:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:52.622 20:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:52.622 20:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:52.622 20:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:52.622 20:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.622 20:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.622 [2024-11-26 20:25:46.137319] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:52.622 [2024-11-26 20:25:46.137505] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:52.622 [2024-11-26 20:25:46.137524] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:52.622 request: 00:12:52.622 { 00:12:52.622 "base_bdev": "BaseBdev1", 00:12:52.622 "raid_bdev": "raid_bdev1", 00:12:52.622 "method": 
"bdev_raid_add_base_bdev", 00:12:52.622 "req_id": 1 00:12:52.622 } 00:12:52.622 Got JSON-RPC error response 00:12:52.622 response: 00:12:52.622 { 00:12:52.622 "code": -22, 00:12:52.622 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:52.622 } 00:12:52.622 20:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:52.622 20:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:12:52.622 20:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:52.622 20:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:52.622 20:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:52.622 20:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:54.000 20:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:54.000 20:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:54.000 20:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:54.000 20:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:54.000 20:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:54.000 20:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:54.000 20:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.000 20:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.000 20:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.000 20:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.000 20:25:47 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.000 20:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.000 20:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.000 20:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.000 20:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.000 20:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.000 "name": "raid_bdev1", 00:12:54.000 "uuid": "cfc9edc4-e914-4ccb-84ef-e6fe73bbbb33", 00:12:54.000 "strip_size_kb": 0, 00:12:54.000 "state": "online", 00:12:54.000 "raid_level": "raid1", 00:12:54.000 "superblock": true, 00:12:54.000 "num_base_bdevs": 2, 00:12:54.000 "num_base_bdevs_discovered": 1, 00:12:54.000 "num_base_bdevs_operational": 1, 00:12:54.000 "base_bdevs_list": [ 00:12:54.000 { 00:12:54.000 "name": null, 00:12:54.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.000 "is_configured": false, 00:12:54.000 "data_offset": 0, 00:12:54.000 "data_size": 63488 00:12:54.000 }, 00:12:54.000 { 00:12:54.000 "name": "BaseBdev2", 00:12:54.000 "uuid": "11a5140d-341e-5c57-ac9b-7ca167a08d95", 00:12:54.000 "is_configured": true, 00:12:54.000 "data_offset": 2048, 00:12:54.000 "data_size": 63488 00:12:54.000 } 00:12:54.000 ] 00:12:54.000 }' 00:12:54.000 20:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.000 20:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.259 20:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:54.259 20:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:54.259 20:25:47 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:54.259 20:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:54.259 20:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:54.259 20:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.259 20:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.259 20:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.259 20:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.259 20:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.259 20:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:54.259 "name": "raid_bdev1", 00:12:54.259 "uuid": "cfc9edc4-e914-4ccb-84ef-e6fe73bbbb33", 00:12:54.259 "strip_size_kb": 0, 00:12:54.259 "state": "online", 00:12:54.259 "raid_level": "raid1", 00:12:54.259 "superblock": true, 00:12:54.259 "num_base_bdevs": 2, 00:12:54.259 "num_base_bdevs_discovered": 1, 00:12:54.259 "num_base_bdevs_operational": 1, 00:12:54.259 "base_bdevs_list": [ 00:12:54.259 { 00:12:54.259 "name": null, 00:12:54.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.259 "is_configured": false, 00:12:54.259 "data_offset": 0, 00:12:54.259 "data_size": 63488 00:12:54.259 }, 00:12:54.259 { 00:12:54.259 "name": "BaseBdev2", 00:12:54.259 "uuid": "11a5140d-341e-5c57-ac9b-7ca167a08d95", 00:12:54.259 "is_configured": true, 00:12:54.259 "data_offset": 2048, 00:12:54.259 "data_size": 63488 00:12:54.259 } 00:12:54.259 ] 00:12:54.259 }' 00:12:54.259 20:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:54.259 20:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:12:54.259 20:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:54.259 20:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:54.259 20:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 86952 00:12:54.259 20:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 86952 ']' 00:12:54.259 20:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 86952 00:12:54.259 20:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:12:54.259 20:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:54.259 20:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86952 00:12:54.259 killing process with pid 86952 00:12:54.259 Received shutdown signal, test time was about 60.000000 seconds 00:12:54.259 00:12:54.259 Latency(us) 00:12:54.259 [2024-11-26T20:25:47.811Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:54.259 [2024-11-26T20:25:47.811Z] =================================================================================================================== 00:12:54.259 [2024-11-26T20:25:47.811Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:54.260 20:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:54.260 20:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:54.260 20:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86952' 00:12:54.260 20:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 86952 00:12:54.260 [2024-11-26 20:25:47.765361] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:54.260 [2024-11-26 
20:25:47.765502] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:54.260 20:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 86952 00:12:54.260 [2024-11-26 20:25:47.765561] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:54.260 [2024-11-26 20:25:47.765571] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:12:54.518 [2024-11-26 20:25:47.818065] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:54.777 20:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:12:54.777 00:12:54.777 real 0m22.451s 00:12:54.777 user 0m27.634s 00:12:54.777 sys 0m3.837s 00:12:54.777 20:25:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:54.777 20:25:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.777 ************************************ 00:12:54.777 END TEST raid_rebuild_test_sb 00:12:54.777 ************************************ 00:12:54.777 20:25:48 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:12:54.777 20:25:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:54.777 20:25:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:54.777 20:25:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:54.777 ************************************ 00:12:54.777 START TEST raid_rebuild_test_io 00:12:54.777 ************************************ 00:12:54.777 20:25:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false true true 00:12:54.777 20:25:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:54.777 20:25:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:12:54.777 20:25:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:54.777 20:25:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:54.777 20:25:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:54.777 20:25:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:54.777 20:25:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:54.777 20:25:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:54.777 20:25:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:54.777 20:25:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:54.777 20:25:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:54.777 20:25:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:54.777 20:25:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:54.777 20:25:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:54.777 20:25:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:54.777 20:25:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:54.777 20:25:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:54.777 20:25:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:54.777 20:25:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:54.777 20:25:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:54.777 20:25:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:54.777 
20:25:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:54.777 20:25:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:54.777 20:25:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=87674 00:12:54.777 20:25:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 87674 00:12:54.777 20:25:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:54.777 20:25:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 87674 ']' 00:12:54.777 20:25:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.777 20:25:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:54.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.777 20:25:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.777 20:25:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:54.777 20:25:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.037 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:55.037 Zero copy mechanism will not be used. 00:12:55.037 [2024-11-26 20:25:48.332627] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:12:55.037 [2024-11-26 20:25:48.332768] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87674 ] 00:12:55.037 [2024-11-26 20:25:48.493898] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.037 [2024-11-26 20:25:48.572474] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.296 [2024-11-26 20:25:48.649301] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:55.296 [2024-11-26 20:25:48.649341] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:55.863 20:25:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:55.863 20:25:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:12:55.863 20:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:55.863 20:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:55.863 20:25:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.863 20:25:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.863 BaseBdev1_malloc 00:12:55.863 20:25:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.863 20:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:55.863 20:25:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.863 20:25:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.863 [2024-11-26 20:25:49.218829] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:12:55.863 [2024-11-26 20:25:49.218896] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:55.863 [2024-11-26 20:25:49.218937] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:55.863 [2024-11-26 20:25:49.218963] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:55.863 [2024-11-26 20:25:49.221128] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:55.863 [2024-11-26 20:25:49.221169] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:55.863 BaseBdev1 00:12:55.863 20:25:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.863 20:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:55.863 20:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:55.863 20:25:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.863 20:25:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.863 BaseBdev2_malloc 00:12:55.863 20:25:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.863 20:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:55.863 20:25:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.863 20:25:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.863 [2024-11-26 20:25:49.257898] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:55.863 [2024-11-26 20:25:49.257956] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:55.863 [2024-11-26 20:25:49.257991] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:55.863 [2024-11-26 20:25:49.257999] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:55.863 [2024-11-26 20:25:49.260092] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:55.863 [2024-11-26 20:25:49.260140] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:55.863 BaseBdev2 00:12:55.863 20:25:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.863 20:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:55.863 20:25:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.863 20:25:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.863 spare_malloc 00:12:55.863 20:25:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.864 20:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:55.864 20:25:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.864 20:25:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.864 spare_delay 00:12:55.864 20:25:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.864 20:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:55.864 20:25:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.864 20:25:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.864 [2024-11-26 20:25:49.304493] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:12:55.864 [2024-11-26 20:25:49.304554] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:55.864 [2024-11-26 20:25:49.304593] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:55.864 [2024-11-26 20:25:49.304601] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:55.864 [2024-11-26 20:25:49.306887] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:55.864 [2024-11-26 20:25:49.306922] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:55.864 spare 00:12:55.864 20:25:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.864 20:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:55.864 20:25:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.864 20:25:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.864 [2024-11-26 20:25:49.312549] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:55.864 [2024-11-26 20:25:49.314582] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:55.864 [2024-11-26 20:25:49.314686] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:12:55.864 [2024-11-26 20:25:49.314707] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:55.864 [2024-11-26 20:25:49.314962] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:12:55.864 [2024-11-26 20:25:49.315089] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:12:55.864 [2024-11-26 20:25:49.315106] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000006280 00:12:55.864 [2024-11-26 20:25:49.315233] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:55.864 20:25:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.864 20:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:55.864 20:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:55.864 20:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:55.864 20:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:55.864 20:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:55.864 20:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:55.864 20:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.864 20:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.864 20:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.864 20:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.864 20:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.864 20:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.864 20:25:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.864 20:25:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.864 20:25:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.864 20:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.864 
"name": "raid_bdev1", 00:12:55.864 "uuid": "fbbc31b8-8df0-4088-944d-5372821e6886", 00:12:55.864 "strip_size_kb": 0, 00:12:55.864 "state": "online", 00:12:55.864 "raid_level": "raid1", 00:12:55.864 "superblock": false, 00:12:55.864 "num_base_bdevs": 2, 00:12:55.864 "num_base_bdevs_discovered": 2, 00:12:55.864 "num_base_bdevs_operational": 2, 00:12:55.864 "base_bdevs_list": [ 00:12:55.864 { 00:12:55.864 "name": "BaseBdev1", 00:12:55.864 "uuid": "9145c1c0-9a61-50e3-8d5d-c102b0ffa9df", 00:12:55.864 "is_configured": true, 00:12:55.864 "data_offset": 0, 00:12:55.864 "data_size": 65536 00:12:55.864 }, 00:12:55.864 { 00:12:55.864 "name": "BaseBdev2", 00:12:55.864 "uuid": "0a1455f1-d9a2-58cc-ba7d-2c7eed8eaea5", 00:12:55.864 "is_configured": true, 00:12:55.864 "data_offset": 0, 00:12:55.864 "data_size": 65536 00:12:55.864 } 00:12:55.864 ] 00:12:55.864 }' 00:12:55.864 20:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.864 20:25:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.431 20:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:56.431 20:25:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.431 20:25:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.431 20:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:56.431 [2024-11-26 20:25:49.752101] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:56.431 20:25:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.431 20:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:56.431 20:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:56.431 20:25:49 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.431 20:25:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.431 20:25:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.431 20:25:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.431 20:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:56.431 20:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:56.431 20:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:56.431 20:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:56.431 20:25:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.431 20:25:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.431 [2024-11-26 20:25:49.827691] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:56.431 20:25:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.431 20:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:56.431 20:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:56.431 20:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:56.431 20:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:56.431 20:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:56.431 20:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:56.431 20:25:49 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.431 20:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.432 20:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.432 20:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.432 20:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.432 20:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.432 20:25:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.432 20:25:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.432 20:25:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.432 20:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.432 "name": "raid_bdev1", 00:12:56.432 "uuid": "fbbc31b8-8df0-4088-944d-5372821e6886", 00:12:56.432 "strip_size_kb": 0, 00:12:56.432 "state": "online", 00:12:56.432 "raid_level": "raid1", 00:12:56.432 "superblock": false, 00:12:56.432 "num_base_bdevs": 2, 00:12:56.432 "num_base_bdevs_discovered": 1, 00:12:56.432 "num_base_bdevs_operational": 1, 00:12:56.432 "base_bdevs_list": [ 00:12:56.432 { 00:12:56.432 "name": null, 00:12:56.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.432 "is_configured": false, 00:12:56.432 "data_offset": 0, 00:12:56.432 "data_size": 65536 00:12:56.432 }, 00:12:56.432 { 00:12:56.432 "name": "BaseBdev2", 00:12:56.432 "uuid": "0a1455f1-d9a2-58cc-ba7d-2c7eed8eaea5", 00:12:56.432 "is_configured": true, 00:12:56.432 "data_offset": 0, 00:12:56.432 "data_size": 65536 00:12:56.432 } 00:12:56.432 ] 00:12:56.432 }' 00:12:56.432 20:25:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:12:56.432 20:25:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.432 [2024-11-26 20:25:49.930367] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:56.432 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:56.432 Zero copy mechanism will not be used. 00:12:56.432 Running I/O for 60 seconds... 00:12:56.698 20:25:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:56.698 20:25:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.698 20:25:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.698 [2024-11-26 20:25:50.213711] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:56.698 20:25:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.698 20:25:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:56.969 [2024-11-26 20:25:50.246929] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:56.969 [2024-11-26 20:25:50.249171] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:56.969 [2024-11-26 20:25:50.368987] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:56.969 [2024-11-26 20:25:50.511985] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:56.969 [2024-11-26 20:25:50.512316] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:57.539 [2024-11-26 20:25:50.843751] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:57.539 158.00 IOPS, 474.00 MiB/s 
[2024-11-26T20:25:51.091Z] [2024-11-26 20:25:51.060422] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:57.539 [2024-11-26 20:25:51.060791] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:57.799 20:25:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:57.799 20:25:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:57.799 20:25:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:57.799 20:25:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:57.799 20:25:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:57.799 20:25:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.799 20:25:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.799 20:25:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.799 20:25:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.799 20:25:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.799 20:25:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:57.799 "name": "raid_bdev1", 00:12:57.799 "uuid": "fbbc31b8-8df0-4088-944d-5372821e6886", 00:12:57.799 "strip_size_kb": 0, 00:12:57.799 "state": "online", 00:12:57.799 "raid_level": "raid1", 00:12:57.799 "superblock": false, 00:12:57.799 "num_base_bdevs": 2, 00:12:57.799 "num_base_bdevs_discovered": 2, 00:12:57.799 "num_base_bdevs_operational": 2, 00:12:57.799 "process": { 00:12:57.799 "type": "rebuild", 00:12:57.799 "target": "spare", 
00:12:57.799 "progress": { 00:12:57.799 "blocks": 10240, 00:12:57.799 "percent": 15 00:12:57.799 } 00:12:57.799 }, 00:12:57.799 "base_bdevs_list": [ 00:12:57.799 { 00:12:57.799 "name": "spare", 00:12:57.799 "uuid": "c0dbfec2-d2b1-55b8-a1c6-1894788754e0", 00:12:57.799 "is_configured": true, 00:12:57.799 "data_offset": 0, 00:12:57.799 "data_size": 65536 00:12:57.799 }, 00:12:57.799 { 00:12:57.799 "name": "BaseBdev2", 00:12:57.799 "uuid": "0a1455f1-d9a2-58cc-ba7d-2c7eed8eaea5", 00:12:57.799 "is_configured": true, 00:12:57.799 "data_offset": 0, 00:12:57.799 "data_size": 65536 00:12:57.799 } 00:12:57.799 ] 00:12:57.799 }' 00:12:57.799 20:25:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:57.799 20:25:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:57.799 20:25:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:58.059 20:25:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:58.059 20:25:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:58.059 20:25:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.059 20:25:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.059 [2024-11-26 20:25:51.388374] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:58.059 [2024-11-26 20:25:51.404334] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:58.059 [2024-11-26 20:25:51.406969] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:58.059 [2024-11-26 20:25:51.407017] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:58.059 [2024-11-26 20:25:51.407058] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed 
to remove target bdev: No such device 00:12:58.059 [2024-11-26 20:25:51.420863] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:12:58.059 20:25:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.059 20:25:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:58.059 20:25:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:58.059 20:25:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:58.059 20:25:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:58.059 20:25:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:58.059 20:25:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:58.059 20:25:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.059 20:25:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.059 20:25:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.059 20:25:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.059 20:25:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.059 20:25:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.059 20:25:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.059 20:25:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.059 20:25:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.059 20:25:51 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.059 "name": "raid_bdev1", 00:12:58.059 "uuid": "fbbc31b8-8df0-4088-944d-5372821e6886", 00:12:58.059 "strip_size_kb": 0, 00:12:58.059 "state": "online", 00:12:58.059 "raid_level": "raid1", 00:12:58.059 "superblock": false, 00:12:58.059 "num_base_bdevs": 2, 00:12:58.059 "num_base_bdevs_discovered": 1, 00:12:58.059 "num_base_bdevs_operational": 1, 00:12:58.059 "base_bdevs_list": [ 00:12:58.059 { 00:12:58.059 "name": null, 00:12:58.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.059 "is_configured": false, 00:12:58.059 "data_offset": 0, 00:12:58.059 "data_size": 65536 00:12:58.059 }, 00:12:58.059 { 00:12:58.060 "name": "BaseBdev2", 00:12:58.060 "uuid": "0a1455f1-d9a2-58cc-ba7d-2c7eed8eaea5", 00:12:58.060 "is_configured": true, 00:12:58.060 "data_offset": 0, 00:12:58.060 "data_size": 65536 00:12:58.060 } 00:12:58.060 ] 00:12:58.060 }' 00:12:58.060 20:25:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.060 20:25:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.629 20:25:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:58.629 20:25:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:58.629 20:25:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:58.629 20:25:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:58.629 20:25:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:58.629 20:25:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.629 20:25:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.629 20:25:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:58.629 20:25:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.629 163.00 IOPS, 489.00 MiB/s [2024-11-26T20:25:52.181Z] 20:25:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.629 20:25:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:58.629 "name": "raid_bdev1", 00:12:58.629 "uuid": "fbbc31b8-8df0-4088-944d-5372821e6886", 00:12:58.629 "strip_size_kb": 0, 00:12:58.629 "state": "online", 00:12:58.629 "raid_level": "raid1", 00:12:58.629 "superblock": false, 00:12:58.629 "num_base_bdevs": 2, 00:12:58.629 "num_base_bdevs_discovered": 1, 00:12:58.629 "num_base_bdevs_operational": 1, 00:12:58.629 "base_bdevs_list": [ 00:12:58.629 { 00:12:58.629 "name": null, 00:12:58.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.629 "is_configured": false, 00:12:58.629 "data_offset": 0, 00:12:58.629 "data_size": 65536 00:12:58.629 }, 00:12:58.629 { 00:12:58.629 "name": "BaseBdev2", 00:12:58.629 "uuid": "0a1455f1-d9a2-58cc-ba7d-2c7eed8eaea5", 00:12:58.629 "is_configured": true, 00:12:58.629 "data_offset": 0, 00:12:58.629 "data_size": 65536 00:12:58.629 } 00:12:58.629 ] 00:12:58.629 }' 00:12:58.629 20:25:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:58.629 20:25:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:58.629 20:25:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:58.629 20:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:58.629 20:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:58.629 20:25:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.629 20:25:52 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:58.629 [2024-11-26 20:25:52.050380] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:58.629 20:25:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.629 20:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:58.629 [2024-11-26 20:25:52.086492] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:58.629 [2024-11-26 20:25:52.088549] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:58.889 [2024-11-26 20:25:52.201965] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:58.889 [2024-11-26 20:25:52.202514] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:58.889 [2024-11-26 20:25:52.337031] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:58.889 [2024-11-26 20:25:52.337357] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:59.149 [2024-11-26 20:25:52.575658] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:59.667 165.67 IOPS, 497.00 MiB/s [2024-11-26T20:25:53.219Z] 20:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:59.667 20:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:59.667 20:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:59.667 20:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:59.668 20:25:53 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:59.668 20:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.668 20:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.668 20:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.668 20:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.668 20:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.668 20:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:59.668 "name": "raid_bdev1", 00:12:59.668 "uuid": "fbbc31b8-8df0-4088-944d-5372821e6886", 00:12:59.668 "strip_size_kb": 0, 00:12:59.668 "state": "online", 00:12:59.668 "raid_level": "raid1", 00:12:59.668 "superblock": false, 00:12:59.668 "num_base_bdevs": 2, 00:12:59.668 "num_base_bdevs_discovered": 2, 00:12:59.668 "num_base_bdevs_operational": 2, 00:12:59.668 "process": { 00:12:59.668 "type": "rebuild", 00:12:59.668 "target": "spare", 00:12:59.668 "progress": { 00:12:59.668 "blocks": 14336, 00:12:59.668 "percent": 21 00:12:59.668 } 00:12:59.668 }, 00:12:59.668 "base_bdevs_list": [ 00:12:59.668 { 00:12:59.668 "name": "spare", 00:12:59.668 "uuid": "c0dbfec2-d2b1-55b8-a1c6-1894788754e0", 00:12:59.668 "is_configured": true, 00:12:59.668 "data_offset": 0, 00:12:59.668 "data_size": 65536 00:12:59.668 }, 00:12:59.668 { 00:12:59.668 "name": "BaseBdev2", 00:12:59.668 "uuid": "0a1455f1-d9a2-58cc-ba7d-2c7eed8eaea5", 00:12:59.668 "is_configured": true, 00:12:59.668 "data_offset": 0, 00:12:59.668 "data_size": 65536 00:12:59.668 } 00:12:59.668 ] 00:12:59.668 }' 00:12:59.668 20:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:59.668 20:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:12:59.668 20:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:59.928 20:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:59.928 20:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:59.928 20:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:59.928 20:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:59.928 20:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:59.928 20:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=342 00:12:59.928 20:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:59.928 20:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:59.928 20:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:59.928 20:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:59.928 20:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:59.928 20:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:59.928 20:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.928 20:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.928 20:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.928 20:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.928 20:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.928 20:25:53 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:59.928 "name": "raid_bdev1", 00:12:59.928 "uuid": "fbbc31b8-8df0-4088-944d-5372821e6886", 00:12:59.928 "strip_size_kb": 0, 00:12:59.928 "state": "online", 00:12:59.928 "raid_level": "raid1", 00:12:59.928 "superblock": false, 00:12:59.928 "num_base_bdevs": 2, 00:12:59.928 "num_base_bdevs_discovered": 2, 00:12:59.928 "num_base_bdevs_operational": 2, 00:12:59.928 "process": { 00:12:59.928 "type": "rebuild", 00:12:59.928 "target": "spare", 00:12:59.928 "progress": { 00:12:59.928 "blocks": 16384, 00:12:59.928 "percent": 25 00:12:59.928 } 00:12:59.928 }, 00:12:59.928 "base_bdevs_list": [ 00:12:59.928 { 00:12:59.928 "name": "spare", 00:12:59.928 "uuid": "c0dbfec2-d2b1-55b8-a1c6-1894788754e0", 00:12:59.928 "is_configured": true, 00:12:59.928 "data_offset": 0, 00:12:59.928 "data_size": 65536 00:12:59.928 }, 00:12:59.928 { 00:12:59.928 "name": "BaseBdev2", 00:12:59.928 "uuid": "0a1455f1-d9a2-58cc-ba7d-2c7eed8eaea5", 00:12:59.928 "is_configured": true, 00:12:59.928 "data_offset": 0, 00:12:59.928 "data_size": 65536 00:12:59.928 } 00:12:59.928 ] 00:12:59.928 }' 00:12:59.928 20:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:59.928 20:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:59.928 20:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:59.928 20:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:59.928 20:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:00.188 [2024-11-26 20:25:53.513211] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:00.454 [2024-11-26 20:25:53.830387] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 
offset_begin: 24576 offset_end: 30720 00:13:00.719 141.50 IOPS, 424.50 MiB/s [2024-11-26T20:25:54.271Z] [2024-11-26 20:25:54.053456] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:00.978 [2024-11-26 20:25:54.274642] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:13:00.978 20:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:00.978 20:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:00.978 20:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:00.978 20:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:00.978 20:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:00.978 20:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:00.978 20:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.978 20:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.978 20:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.978 20:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.978 20:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.978 20:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:00.978 "name": "raid_bdev1", 00:13:00.978 "uuid": "fbbc31b8-8df0-4088-944d-5372821e6886", 00:13:00.978 "strip_size_kb": 0, 00:13:00.978 "state": "online", 00:13:00.978 "raid_level": "raid1", 00:13:00.978 "superblock": false, 00:13:00.978 
"num_base_bdevs": 2, 00:13:00.978 "num_base_bdevs_discovered": 2, 00:13:00.978 "num_base_bdevs_operational": 2, 00:13:00.978 "process": { 00:13:00.978 "type": "rebuild", 00:13:00.978 "target": "spare", 00:13:00.978 "progress": { 00:13:00.978 "blocks": 32768, 00:13:00.978 "percent": 50 00:13:00.978 } 00:13:00.978 }, 00:13:00.978 "base_bdevs_list": [ 00:13:00.978 { 00:13:00.978 "name": "spare", 00:13:00.978 "uuid": "c0dbfec2-d2b1-55b8-a1c6-1894788754e0", 00:13:00.978 "is_configured": true, 00:13:00.978 "data_offset": 0, 00:13:00.978 "data_size": 65536 00:13:00.978 }, 00:13:00.978 { 00:13:00.978 "name": "BaseBdev2", 00:13:00.978 "uuid": "0a1455f1-d9a2-58cc-ba7d-2c7eed8eaea5", 00:13:00.978 "is_configured": true, 00:13:00.978 "data_offset": 0, 00:13:00.978 "data_size": 65536 00:13:00.978 } 00:13:00.978 ] 00:13:00.978 }' 00:13:00.978 20:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:00.978 20:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:00.979 20:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:00.979 [2024-11-26 20:25:54.490495] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:00.979 20:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:00.979 20:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:01.545 [2024-11-26 20:25:54.822884] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:01.545 [2024-11-26 20:25:54.823367] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:01.545 120.00 IOPS, 360.00 MiB/s [2024-11-26T20:25:55.097Z] [2024-11-26 20:25:55.041045] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:02.114 20:25:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:02.115 20:25:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:02.115 20:25:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:02.115 20:25:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:02.115 20:25:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:02.115 20:25:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:02.115 20:25:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.115 20:25:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.115 20:25:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.115 20:25:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.115 20:25:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.115 20:25:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:02.115 "name": "raid_bdev1", 00:13:02.115 "uuid": "fbbc31b8-8df0-4088-944d-5372821e6886", 00:13:02.115 "strip_size_kb": 0, 00:13:02.115 "state": "online", 00:13:02.115 "raid_level": "raid1", 00:13:02.115 "superblock": false, 00:13:02.115 "num_base_bdevs": 2, 00:13:02.115 "num_base_bdevs_discovered": 2, 00:13:02.115 "num_base_bdevs_operational": 2, 00:13:02.115 "process": { 00:13:02.115 "type": "rebuild", 00:13:02.115 "target": "spare", 00:13:02.115 "progress": { 00:13:02.115 "blocks": 47104, 00:13:02.115 "percent": 71 00:13:02.115 } 00:13:02.115 }, 
00:13:02.115 "base_bdevs_list": [ 00:13:02.115 { 00:13:02.115 "name": "spare", 00:13:02.115 "uuid": "c0dbfec2-d2b1-55b8-a1c6-1894788754e0", 00:13:02.115 "is_configured": true, 00:13:02.115 "data_offset": 0, 00:13:02.115 "data_size": 65536 00:13:02.115 }, 00:13:02.115 { 00:13:02.115 "name": "BaseBdev2", 00:13:02.115 "uuid": "0a1455f1-d9a2-58cc-ba7d-2c7eed8eaea5", 00:13:02.115 "is_configured": true, 00:13:02.115 "data_offset": 0, 00:13:02.115 "data_size": 65536 00:13:02.115 } 00:13:02.115 ] 00:13:02.115 }' 00:13:02.115 20:25:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:02.115 20:25:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:02.115 20:25:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:02.374 20:25:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:02.374 20:25:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:02.374 [2024-11-26 20:25:55.721826] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:13:03.234 106.83 IOPS, 320.50 MiB/s [2024-11-26T20:25:56.786Z] [2024-11-26 20:25:56.491858] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:03.234 [2024-11-26 20:25:56.591671] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:03.234 [2024-11-26 20:25:56.601447] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:03.234 20:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:03.234 20:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:03.234 20:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:13:03.234 20:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:03.234 20:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:03.234 20:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:03.234 20:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.234 20:25:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.234 20:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.234 20:25:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.234 20:25:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.234 20:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:03.234 "name": "raid_bdev1", 00:13:03.234 "uuid": "fbbc31b8-8df0-4088-944d-5372821e6886", 00:13:03.234 "strip_size_kb": 0, 00:13:03.234 "state": "online", 00:13:03.234 "raid_level": "raid1", 00:13:03.234 "superblock": false, 00:13:03.234 "num_base_bdevs": 2, 00:13:03.234 "num_base_bdevs_discovered": 2, 00:13:03.234 "num_base_bdevs_operational": 2, 00:13:03.234 "base_bdevs_list": [ 00:13:03.234 { 00:13:03.234 "name": "spare", 00:13:03.234 "uuid": "c0dbfec2-d2b1-55b8-a1c6-1894788754e0", 00:13:03.234 "is_configured": true, 00:13:03.234 "data_offset": 0, 00:13:03.234 "data_size": 65536 00:13:03.234 }, 00:13:03.234 { 00:13:03.234 "name": "BaseBdev2", 00:13:03.234 "uuid": "0a1455f1-d9a2-58cc-ba7d-2c7eed8eaea5", 00:13:03.234 "is_configured": true, 00:13:03.234 "data_offset": 0, 00:13:03.234 "data_size": 65536 00:13:03.234 } 00:13:03.234 ] 00:13:03.234 }' 00:13:03.234 20:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:03.234 20:25:56 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:03.234 20:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:03.494 20:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:03.494 20:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:13:03.494 20:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:03.494 20:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:03.494 20:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:03.494 20:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:03.494 20:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:03.494 20:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.494 20:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.494 20:25:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.494 20:25:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.494 20:25:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.494 20:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:03.494 "name": "raid_bdev1", 00:13:03.494 "uuid": "fbbc31b8-8df0-4088-944d-5372821e6886", 00:13:03.494 "strip_size_kb": 0, 00:13:03.494 "state": "online", 00:13:03.494 "raid_level": "raid1", 00:13:03.494 "superblock": false, 00:13:03.494 "num_base_bdevs": 2, 00:13:03.494 "num_base_bdevs_discovered": 2, 00:13:03.494 "num_base_bdevs_operational": 2, 00:13:03.494 "base_bdevs_list": [ 
00:13:03.494 { 00:13:03.494 "name": "spare", 00:13:03.494 "uuid": "c0dbfec2-d2b1-55b8-a1c6-1894788754e0", 00:13:03.494 "is_configured": true, 00:13:03.494 "data_offset": 0, 00:13:03.494 "data_size": 65536 00:13:03.494 }, 00:13:03.494 { 00:13:03.494 "name": "BaseBdev2", 00:13:03.494 "uuid": "0a1455f1-d9a2-58cc-ba7d-2c7eed8eaea5", 00:13:03.494 "is_configured": true, 00:13:03.494 "data_offset": 0, 00:13:03.494 "data_size": 65536 00:13:03.494 } 00:13:03.494 ] 00:13:03.494 }' 00:13:03.494 20:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:03.494 20:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:03.494 20:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:03.494 20:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:03.494 20:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:03.494 20:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:03.495 20:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:03.495 20:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:03.495 20:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:03.495 20:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:03.495 20:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.495 20:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.495 20:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.495 20:25:56 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.495 20:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.495 20:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.495 20:25:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.495 20:25:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.495 96.43 IOPS, 289.29 MiB/s [2024-11-26T20:25:57.047Z] 20:25:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.495 20:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.495 "name": "raid_bdev1", 00:13:03.495 "uuid": "fbbc31b8-8df0-4088-944d-5372821e6886", 00:13:03.495 "strip_size_kb": 0, 00:13:03.495 "state": "online", 00:13:03.495 "raid_level": "raid1", 00:13:03.495 "superblock": false, 00:13:03.495 "num_base_bdevs": 2, 00:13:03.495 "num_base_bdevs_discovered": 2, 00:13:03.495 "num_base_bdevs_operational": 2, 00:13:03.495 "base_bdevs_list": [ 00:13:03.495 { 00:13:03.495 "name": "spare", 00:13:03.495 "uuid": "c0dbfec2-d2b1-55b8-a1c6-1894788754e0", 00:13:03.495 "is_configured": true, 00:13:03.495 "data_offset": 0, 00:13:03.495 "data_size": 65536 00:13:03.495 }, 00:13:03.495 { 00:13:03.495 "name": "BaseBdev2", 00:13:03.495 "uuid": "0a1455f1-d9a2-58cc-ba7d-2c7eed8eaea5", 00:13:03.495 "is_configured": true, 00:13:03.495 "data_offset": 0, 00:13:03.495 "data_size": 65536 00:13:03.495 } 00:13:03.495 ] 00:13:03.495 }' 00:13:03.495 20:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.495 20:25:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.064 20:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:04.064 20:25:57 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.064 20:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.064 [2024-11-26 20:25:57.354521] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:04.064 [2024-11-26 20:25:57.354613] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:04.064 00:13:04.064 Latency(us) 00:13:04.064 [2024-11-26T20:25:57.616Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:04.064 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:04.064 raid_bdev1 : 7.54 91.73 275.18 0.00 0.00 14567.25 298.70 110352.32 00:13:04.064 [2024-11-26T20:25:57.616Z] =================================================================================================================== 00:13:04.064 [2024-11-26T20:25:57.616Z] Total : 91.73 275.18 0.00 0.00 14567.25 298.70 110352.32 00:13:04.064 { 00:13:04.064 "results": [ 00:13:04.064 { 00:13:04.064 "job": "raid_bdev1", 00:13:04.064 "core_mask": "0x1", 00:13:04.064 "workload": "randrw", 00:13:04.064 "percentage": 50, 00:13:04.064 "status": "finished", 00:13:04.064 "queue_depth": 2, 00:13:04.064 "io_size": 3145728, 00:13:04.064 "runtime": 7.544194, 00:13:04.064 "iops": 91.72616716908394, 00:13:04.064 "mibps": 275.1785015072518, 00:13:04.064 "io_failed": 0, 00:13:04.064 "io_timeout": 0, 00:13:04.064 "avg_latency_us": 14567.247151475376, 00:13:04.064 "min_latency_us": 298.70393013100437, 00:13:04.064 "max_latency_us": 110352.32139737991 00:13:04.064 } 00:13:04.064 ], 00:13:04.064 "core_count": 1 00:13:04.064 } 00:13:04.064 [2024-11-26 20:25:57.467080] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:04.064 [2024-11-26 20:25:57.467136] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:04.064 [2024-11-26 20:25:57.467219] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:04.064 [2024-11-26 20:25:57.467233] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:13:04.064 20:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.064 20:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.064 20:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.064 20:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.064 20:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:04.064 20:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.064 20:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:04.064 20:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:04.064 20:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:04.064 20:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:04.064 20:25:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:04.064 20:25:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:04.064 20:25:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:04.064 20:25:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:04.064 20:25:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:04.064 20:25:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:04.064 20:25:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:04.064 
20:25:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:04.064 20:25:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:04.336 /dev/nbd0 00:13:04.336 20:25:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:04.336 20:25:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:04.336 20:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:04.336 20:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:13:04.336 20:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:04.336 20:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:04.336 20:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:04.336 20:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:13:04.336 20:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:04.336 20:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:04.336 20:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:04.336 1+0 records in 00:13:04.336 1+0 records out 00:13:04.336 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000332307 s, 12.3 MB/s 00:13:04.336 20:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.336 20:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:13:04.336 20:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.336 20:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:04.336 20:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:13:04.336 20:25:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:04.336 20:25:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:04.336 20:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:04.336 20:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:04.336 20:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:04.336 20:25:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:04.336 20:25:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:04.336 20:25:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:04.336 20:25:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:04.336 20:25:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:04.336 20:25:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:04.336 20:25:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:04.336 20:25:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:04.336 20:25:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:04.621 /dev/nbd1 00:13:04.621 20:25:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:04.621 20:25:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 
-- # waitfornbd nbd1 00:13:04.621 20:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:04.621 20:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:13:04.621 20:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:04.621 20:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:04.621 20:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:04.621 20:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:13:04.621 20:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:04.621 20:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:04.621 20:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:04.621 1+0 records in 00:13:04.621 1+0 records out 00:13:04.621 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270965 s, 15.1 MB/s 00:13:04.621 20:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.621 20:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:13:04.621 20:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.621 20:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:04.621 20:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:13:04.621 20:25:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:04.621 20:25:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:04.621 
20:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:04.621 20:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:04.621 20:25:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:04.621 20:25:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:04.621 20:25:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:04.621 20:25:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:04.621 20:25:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:04.621 20:25:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:04.881 20:25:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:04.881 20:25:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:04.881 20:25:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:04.881 20:25:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:04.881 20:25:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:04.881 20:25:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:04.881 20:25:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:04.881 20:25:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:04.881 20:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:04.881 20:25:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:04.881 20:25:58 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:04.881 20:25:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:04.881 20:25:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:04.881 20:25:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:04.881 20:25:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:05.140 20:25:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:05.140 20:25:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:05.140 20:25:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:05.140 20:25:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:05.140 20:25:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:05.140 20:25:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:05.140 20:25:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:05.140 20:25:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:05.140 20:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:05.140 20:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 87674 00:13:05.140 20:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 87674 ']' 00:13:05.140 20:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 87674 00:13:05.140 20:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:13:05.140 20:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:05.140 
20:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87674 00:13:05.140 20:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:05.140 20:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:05.140 20:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87674' 00:13:05.140 killing process with pid 87674 00:13:05.140 20:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 87674 00:13:05.140 Received shutdown signal, test time was about 8.726301 seconds 00:13:05.141 00:13:05.141 Latency(us) 00:13:05.141 [2024-11-26T20:25:58.693Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:05.141 [2024-11-26T20:25:58.693Z] =================================================================================================================== 00:13:05.141 [2024-11-26T20:25:58.693Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:05.141 [2024-11-26 20:25:58.642257] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:05.141 20:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 87674 00:13:05.141 [2024-11-26 20:25:58.684354] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:05.709 ************************************ 00:13:05.709 END TEST raid_rebuild_test_io 00:13:05.709 ************************************ 00:13:05.709 20:25:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:05.709 00:13:05.709 real 0m10.804s 00:13:05.709 user 0m13.892s 00:13:05.709 sys 0m1.451s 00:13:05.709 20:25:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:05.709 20:25:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.709 20:25:59 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test 
raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:13:05.709 20:25:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:05.709 20:25:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:05.709 20:25:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:05.709 ************************************ 00:13:05.709 START TEST raid_rebuild_test_sb_io 00:13:05.709 ************************************ 00:13:05.709 20:25:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true true true 00:13:05.709 20:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:05.709 20:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:05.709 20:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:05.709 20:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:05.709 20:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:05.709 20:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:05.709 20:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:05.709 20:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:05.709 20:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:05.709 20:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:05.709 20:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:05.709 20:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:05.709 20:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:05.709 20:25:59 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:05.709 20:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:05.709 20:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:05.709 20:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:05.709 20:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:05.709 20:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:05.709 20:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:05.709 20:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:05.709 20:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:05.709 20:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:05.709 20:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:05.709 20:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=88033 00:13:05.709 20:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:05.709 20:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 88033 00:13:05.709 20:25:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 88033 ']' 00:13:05.709 20:25:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:05.709 20:25:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:05.709 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock... 00:13:05.709 20:25:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:05.709 20:25:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:05.709 20:25:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.709 [2024-11-26 20:25:59.198518] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:05.709 [2024-11-26 20:25:59.198689] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88033 ] 00:13:05.709 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:05.709 Zero copy mechanism will not be used. 00:13:05.967 [2024-11-26 20:25:59.359202] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:05.967 [2024-11-26 20:25:59.439303] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.967 [2024-11-26 20:25:59.514867] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:05.967 [2024-11-26 20:25:59.514905] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:06.535 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:06.535 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:13:06.535 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:06.535 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:06.535 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:06.535 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.796 BaseBdev1_malloc 00:13:06.796 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.796 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:06.796 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.796 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.796 [2024-11-26 20:26:00.100483] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:06.796 [2024-11-26 20:26:00.100639] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:06.796 [2024-11-26 20:26:00.100708] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:06.796 [2024-11-26 20:26:00.100760] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:06.796 [2024-11-26 20:26:00.103053] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:06.796 [2024-11-26 20:26:00.103129] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:06.796 BaseBdev1 00:13:06.796 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.796 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:06.796 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:06.796 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.796 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.796 BaseBdev2_malloc 00:13:06.796 20:26:00 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.796 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:06.796 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.796 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.796 [2024-11-26 20:26:00.144894] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:06.796 [2024-11-26 20:26:00.144961] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:06.796 [2024-11-26 20:26:00.145001] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:06.796 [2024-11-26 20:26:00.145011] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:06.796 [2024-11-26 20:26:00.147409] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:06.796 [2024-11-26 20:26:00.147446] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:06.796 BaseBdev2 00:13:06.796 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.796 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:06.796 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.796 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.796 spare_malloc 00:13:06.796 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.796 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:06.796 20:26:00 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.796 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.796 spare_delay 00:13:06.796 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.796 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:06.796 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.796 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.796 [2024-11-26 20:26:00.186649] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:06.796 [2024-11-26 20:26:00.186757] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:06.796 [2024-11-26 20:26:00.186822] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:06.796 [2024-11-26 20:26:00.186868] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:06.796 [2024-11-26 20:26:00.189132] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:06.796 [2024-11-26 20:26:00.189203] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:06.796 spare 00:13:06.796 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.796 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:06.796 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.796 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.796 [2024-11-26 20:26:00.198668] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:13:06.796 [2024-11-26 20:26:00.200550] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:06.796 [2024-11-26 20:26:00.200738] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:13:06.796 [2024-11-26 20:26:00.200753] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:06.796 [2024-11-26 20:26:00.200999] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:13:06.796 [2024-11-26 20:26:00.201157] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:13:06.796 [2024-11-26 20:26:00.201169] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:13:06.796 [2024-11-26 20:26:00.201294] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:06.796 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.796 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:06.796 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:06.796 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:06.796 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:06.796 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:06.796 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:06.796 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.796 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.796 20:26:00 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.796 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.796 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.797 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.797 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.797 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.797 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.797 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.797 "name": "raid_bdev1", 00:13:06.797 "uuid": "64c3663c-821e-4c29-a37e-a6a7c78d7b46", 00:13:06.797 "strip_size_kb": 0, 00:13:06.797 "state": "online", 00:13:06.797 "raid_level": "raid1", 00:13:06.797 "superblock": true, 00:13:06.797 "num_base_bdevs": 2, 00:13:06.797 "num_base_bdevs_discovered": 2, 00:13:06.797 "num_base_bdevs_operational": 2, 00:13:06.797 "base_bdevs_list": [ 00:13:06.797 { 00:13:06.797 "name": "BaseBdev1", 00:13:06.797 "uuid": "63060352-0ac6-5796-afea-dfd5b64c823a", 00:13:06.797 "is_configured": true, 00:13:06.797 "data_offset": 2048, 00:13:06.797 "data_size": 63488 00:13:06.797 }, 00:13:06.797 { 00:13:06.797 "name": "BaseBdev2", 00:13:06.797 "uuid": "fed8cf25-d09f-50c3-98ba-b4471a44370b", 00:13:06.797 "is_configured": true, 00:13:06.797 "data_offset": 2048, 00:13:06.797 "data_size": 63488 00:13:06.797 } 00:13:06.797 ] 00:13:06.797 }' 00:13:06.797 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.797 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.367 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:07.367 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:07.367 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.367 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.367 [2024-11-26 20:26:00.682205] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:07.367 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.367 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:07.367 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.367 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:07.367 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.367 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.367 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.367 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:07.367 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:07.367 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:07.367 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:07.367 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.367 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:13:07.367 [2024-11-26 20:26:00.769732] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:07.367 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.367 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:07.367 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:07.367 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:07.367 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:07.367 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:07.367 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:07.367 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.367 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.367 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.367 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.367 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.367 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.367 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.367 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.367 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.368 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:13:07.368 "name": "raid_bdev1", 00:13:07.368 "uuid": "64c3663c-821e-4c29-a37e-a6a7c78d7b46", 00:13:07.368 "strip_size_kb": 0, 00:13:07.368 "state": "online", 00:13:07.368 "raid_level": "raid1", 00:13:07.368 "superblock": true, 00:13:07.368 "num_base_bdevs": 2, 00:13:07.368 "num_base_bdevs_discovered": 1, 00:13:07.368 "num_base_bdevs_operational": 1, 00:13:07.368 "base_bdevs_list": [ 00:13:07.368 { 00:13:07.368 "name": null, 00:13:07.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.368 "is_configured": false, 00:13:07.368 "data_offset": 0, 00:13:07.368 "data_size": 63488 00:13:07.368 }, 00:13:07.368 { 00:13:07.368 "name": "BaseBdev2", 00:13:07.368 "uuid": "fed8cf25-d09f-50c3-98ba-b4471a44370b", 00:13:07.368 "is_configured": true, 00:13:07.368 "data_offset": 2048, 00:13:07.368 "data_size": 63488 00:13:07.368 } 00:13:07.368 ] 00:13:07.368 }' 00:13:07.368 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.368 20:26:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.368 [2024-11-26 20:26:00.864309] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:07.368 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:07.368 Zero copy mechanism will not be used. 00:13:07.368 Running I/O for 60 seconds... 
00:13:07.938 20:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:07.938 20:26:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.938 20:26:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.938 [2024-11-26 20:26:01.199565] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:07.938 20:26:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.938 20:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:07.938 [2024-11-26 20:26:01.248531] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:07.938 [2024-11-26 20:26:01.250676] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:07.938 [2024-11-26 20:26:01.367405] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:08.197 [2024-11-26 20:26:01.623319] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:08.197 [2024-11-26 20:26:01.623773] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:08.456 205.00 IOPS, 615.00 MiB/s [2024-11-26T20:26:02.008Z] [2024-11-26 20:26:01.951990] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:08.715 [2024-11-26 20:26:02.156133] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:08.715 [2024-11-26 20:26:02.156586] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:08.715 20:26:02 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:08.715 20:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:08.715 20:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:08.715 20:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:08.715 20:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:08.715 20:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.715 20:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.715 20:26:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.715 20:26:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.715 20:26:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.974 20:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:08.974 "name": "raid_bdev1", 00:13:08.974 "uuid": "64c3663c-821e-4c29-a37e-a6a7c78d7b46", 00:13:08.974 "strip_size_kb": 0, 00:13:08.974 "state": "online", 00:13:08.974 "raid_level": "raid1", 00:13:08.974 "superblock": true, 00:13:08.974 "num_base_bdevs": 2, 00:13:08.974 "num_base_bdevs_discovered": 2, 00:13:08.974 "num_base_bdevs_operational": 2, 00:13:08.974 "process": { 00:13:08.974 "type": "rebuild", 00:13:08.974 "target": "spare", 00:13:08.974 "progress": { 00:13:08.974 "blocks": 10240, 00:13:08.974 "percent": 16 00:13:08.974 } 00:13:08.974 }, 00:13:08.974 "base_bdevs_list": [ 00:13:08.974 { 00:13:08.974 "name": "spare", 00:13:08.974 "uuid": "99ab8bc1-371e-5506-aa46-901b502ec83f", 00:13:08.974 "is_configured": true, 00:13:08.974 "data_offset": 2048, 
00:13:08.974 "data_size": 63488 00:13:08.974 }, 00:13:08.974 { 00:13:08.974 "name": "BaseBdev2", 00:13:08.974 "uuid": "fed8cf25-d09f-50c3-98ba-b4471a44370b", 00:13:08.974 "is_configured": true, 00:13:08.974 "data_offset": 2048, 00:13:08.974 "data_size": 63488 00:13:08.974 } 00:13:08.974 ] 00:13:08.974 }' 00:13:08.974 20:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:08.974 20:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:08.974 20:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:08.974 20:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:08.974 20:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:08.974 20:26:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.974 20:26:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.974 [2024-11-26 20:26:02.387412] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:08.974 [2024-11-26 20:26:02.515496] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:09.234 [2024-11-26 20:26:02.526241] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:09.234 [2024-11-26 20:26:02.526359] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:09.234 [2024-11-26 20:26:02.526379] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:09.234 [2024-11-26 20:26:02.547307] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:13:09.234 20:26:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.234 
20:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:09.234 20:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:09.234 20:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:09.234 20:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:09.234 20:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:09.234 20:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:09.234 20:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.234 20:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.234 20:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.234 20:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.234 20:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.234 20:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.234 20:26:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.234 20:26:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.234 20:26:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.234 20:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.234 "name": "raid_bdev1", 00:13:09.234 "uuid": "64c3663c-821e-4c29-a37e-a6a7c78d7b46", 00:13:09.234 "strip_size_kb": 0, 00:13:09.234 "state": "online", 00:13:09.234 "raid_level": "raid1", 00:13:09.234 
"superblock": true, 00:13:09.234 "num_base_bdevs": 2, 00:13:09.234 "num_base_bdevs_discovered": 1, 00:13:09.234 "num_base_bdevs_operational": 1, 00:13:09.234 "base_bdevs_list": [ 00:13:09.234 { 00:13:09.234 "name": null, 00:13:09.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.234 "is_configured": false, 00:13:09.234 "data_offset": 0, 00:13:09.234 "data_size": 63488 00:13:09.234 }, 00:13:09.234 { 00:13:09.234 "name": "BaseBdev2", 00:13:09.234 "uuid": "fed8cf25-d09f-50c3-98ba-b4471a44370b", 00:13:09.234 "is_configured": true, 00:13:09.234 "data_offset": 2048, 00:13:09.234 "data_size": 63488 00:13:09.234 } 00:13:09.234 ] 00:13:09.234 }' 00:13:09.234 20:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.234 20:26:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.495 164.50 IOPS, 493.50 MiB/s [2024-11-26T20:26:03.047Z] 20:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:09.495 20:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:09.495 20:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:09.495 20:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:09.495 20:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:09.495 20:26:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.495 20:26:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.495 20:26:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.495 20:26:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.495 20:26:03 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.495 20:26:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:09.495 "name": "raid_bdev1", 00:13:09.495 "uuid": "64c3663c-821e-4c29-a37e-a6a7c78d7b46", 00:13:09.495 "strip_size_kb": 0, 00:13:09.495 "state": "online", 00:13:09.495 "raid_level": "raid1", 00:13:09.495 "superblock": true, 00:13:09.495 "num_base_bdevs": 2, 00:13:09.495 "num_base_bdevs_discovered": 1, 00:13:09.495 "num_base_bdevs_operational": 1, 00:13:09.495 "base_bdevs_list": [ 00:13:09.495 { 00:13:09.495 "name": null, 00:13:09.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.495 "is_configured": false, 00:13:09.495 "data_offset": 0, 00:13:09.495 "data_size": 63488 00:13:09.495 }, 00:13:09.495 { 00:13:09.495 "name": "BaseBdev2", 00:13:09.495 "uuid": "fed8cf25-d09f-50c3-98ba-b4471a44370b", 00:13:09.495 "is_configured": true, 00:13:09.495 "data_offset": 2048, 00:13:09.495 "data_size": 63488 00:13:09.495 } 00:13:09.495 ] 00:13:09.495 }' 00:13:09.495 20:26:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:09.754 20:26:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:09.754 20:26:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:09.754 20:26:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:09.754 20:26:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:09.754 20:26:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.754 20:26:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.754 [2024-11-26 20:26:03.134891] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:09.754 20:26:03 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.754 20:26:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:09.754 [2024-11-26 20:26:03.186266] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:09.754 [2024-11-26 20:26:03.188559] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:10.014 [2024-11-26 20:26:03.313131] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:10.014 [2024-11-26 20:26:03.313697] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:10.014 [2024-11-26 20:26:03.450999] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:10.273 [2024-11-26 20:26:03.780988] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:10.273 [2024-11-26 20:26:03.781769] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:10.533 170.00 IOPS, 510.00 MiB/s [2024-11-26T20:26:04.085Z] [2024-11-26 20:26:03.998241] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:10.533 [2024-11-26 20:26:03.998760] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:10.793 20:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:10.793 20:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:10.793 20:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:10.793 20:26:04 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:10.793 20:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:10.793 20:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.793 20:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.793 20:26:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.793 20:26:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.793 20:26:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.793 20:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:10.793 "name": "raid_bdev1", 00:13:10.793 "uuid": "64c3663c-821e-4c29-a37e-a6a7c78d7b46", 00:13:10.793 "strip_size_kb": 0, 00:13:10.793 "state": "online", 00:13:10.793 "raid_level": "raid1", 00:13:10.793 "superblock": true, 00:13:10.793 "num_base_bdevs": 2, 00:13:10.793 "num_base_bdevs_discovered": 2, 00:13:10.793 "num_base_bdevs_operational": 2, 00:13:10.793 "process": { 00:13:10.793 "type": "rebuild", 00:13:10.793 "target": "spare", 00:13:10.793 "progress": { 00:13:10.793 "blocks": 10240, 00:13:10.793 "percent": 16 00:13:10.793 } 00:13:10.793 }, 00:13:10.793 "base_bdevs_list": [ 00:13:10.793 { 00:13:10.793 "name": "spare", 00:13:10.793 "uuid": "99ab8bc1-371e-5506-aa46-901b502ec83f", 00:13:10.793 "is_configured": true, 00:13:10.793 "data_offset": 2048, 00:13:10.793 "data_size": 63488 00:13:10.793 }, 00:13:10.793 { 00:13:10.793 "name": "BaseBdev2", 00:13:10.793 "uuid": "fed8cf25-d09f-50c3-98ba-b4471a44370b", 00:13:10.793 "is_configured": true, 00:13:10.793 "data_offset": 2048, 00:13:10.793 "data_size": 63488 00:13:10.793 } 00:13:10.793 ] 00:13:10.793 }' 00:13:10.793 20:26:04 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:10.793 20:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:10.793 20:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:10.793 20:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:10.793 20:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:10.793 20:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:10.793 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:10.793 20:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:10.793 20:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:10.793 20:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:10.793 20:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=353 00:13:10.793 20:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:10.793 20:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:10.793 20:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:10.793 20:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:10.793 20:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:10.793 20:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:10.793 20:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.793 20:26:04 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.793 20:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.793 20:26:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.793 20:26:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.051 20:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:11.051 "name": "raid_bdev1", 00:13:11.051 "uuid": "64c3663c-821e-4c29-a37e-a6a7c78d7b46", 00:13:11.051 "strip_size_kb": 0, 00:13:11.051 "state": "online", 00:13:11.051 "raid_level": "raid1", 00:13:11.051 "superblock": true, 00:13:11.051 "num_base_bdevs": 2, 00:13:11.051 "num_base_bdevs_discovered": 2, 00:13:11.051 "num_base_bdevs_operational": 2, 00:13:11.051 "process": { 00:13:11.051 "type": "rebuild", 00:13:11.051 "target": "spare", 00:13:11.051 "progress": { 00:13:11.051 "blocks": 12288, 00:13:11.051 "percent": 19 00:13:11.051 } 00:13:11.051 }, 00:13:11.051 "base_bdevs_list": [ 00:13:11.051 { 00:13:11.051 "name": "spare", 00:13:11.051 "uuid": "99ab8bc1-371e-5506-aa46-901b502ec83f", 00:13:11.051 "is_configured": true, 00:13:11.051 "data_offset": 2048, 00:13:11.051 "data_size": 63488 00:13:11.051 }, 00:13:11.051 { 00:13:11.051 "name": "BaseBdev2", 00:13:11.051 "uuid": "fed8cf25-d09f-50c3-98ba-b4471a44370b", 00:13:11.051 "is_configured": true, 00:13:11.051 "data_offset": 2048, 00:13:11.051 "data_size": 63488 00:13:11.051 } 00:13:11.051 ] 00:13:11.051 }' 00:13:11.051 20:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:11.051 20:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:11.051 20:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:11.051 [2024-11-26 
20:26:04.443943] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:11.052 20:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:11.052 20:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:11.620 [2024-11-26 20:26:04.865065] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:11.620 [2024-11-26 20:26:04.865706] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:12.189 146.75 IOPS, 440.25 MiB/s [2024-11-26T20:26:05.741Z] 20:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:12.189 20:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:12.189 20:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:12.189 20:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:12.189 20:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:12.189 20:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:12.189 20:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.189 20:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.189 20:26:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.189 20:26:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:12.189 20:26:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.189 
20:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:12.189 "name": "raid_bdev1", 00:13:12.189 "uuid": "64c3663c-821e-4c29-a37e-a6a7c78d7b46", 00:13:12.189 "strip_size_kb": 0, 00:13:12.189 "state": "online", 00:13:12.189 "raid_level": "raid1", 00:13:12.189 "superblock": true, 00:13:12.189 "num_base_bdevs": 2, 00:13:12.189 "num_base_bdevs_discovered": 2, 00:13:12.189 "num_base_bdevs_operational": 2, 00:13:12.189 "process": { 00:13:12.189 "type": "rebuild", 00:13:12.189 "target": "spare", 00:13:12.189 "progress": { 00:13:12.189 "blocks": 30720, 00:13:12.189 "percent": 48 00:13:12.189 } 00:13:12.189 }, 00:13:12.189 "base_bdevs_list": [ 00:13:12.189 { 00:13:12.189 "name": "spare", 00:13:12.189 "uuid": "99ab8bc1-371e-5506-aa46-901b502ec83f", 00:13:12.189 "is_configured": true, 00:13:12.189 "data_offset": 2048, 00:13:12.189 "data_size": 63488 00:13:12.189 }, 00:13:12.189 { 00:13:12.189 "name": "BaseBdev2", 00:13:12.189 "uuid": "fed8cf25-d09f-50c3-98ba-b4471a44370b", 00:13:12.189 "is_configured": true, 00:13:12.189 "data_offset": 2048, 00:13:12.189 "data_size": 63488 00:13:12.189 } 00:13:12.189 ] 00:13:12.189 }' 00:13:12.189 20:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:12.189 20:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:12.189 20:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:12.189 [2024-11-26 20:26:05.550406] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:13:12.189 20:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:12.189 20:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:12.448 130.40 IOPS, 391.20 MiB/s [2024-11-26T20:26:06.000Z] [2024-11-26 20:26:05.964029] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:12.448 [2024-11-26 20:26:05.964803] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:13.016 [2024-11-26 20:26:06.321700] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:13:13.016 [2024-11-26 20:26:06.430258] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:13:13.275 20:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:13.275 20:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:13.275 20:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:13.275 20:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:13.275 20:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:13.275 20:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:13.275 20:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.275 20:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.275 20:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.275 20:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.275 20:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.275 20:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:13.275 "name": 
"raid_bdev1", 00:13:13.275 "uuid": "64c3663c-821e-4c29-a37e-a6a7c78d7b46", 00:13:13.275 "strip_size_kb": 0, 00:13:13.275 "state": "online", 00:13:13.275 "raid_level": "raid1", 00:13:13.275 "superblock": true, 00:13:13.275 "num_base_bdevs": 2, 00:13:13.275 "num_base_bdevs_discovered": 2, 00:13:13.275 "num_base_bdevs_operational": 2, 00:13:13.275 "process": { 00:13:13.275 "type": "rebuild", 00:13:13.275 "target": "spare", 00:13:13.275 "progress": { 00:13:13.275 "blocks": 47104, 00:13:13.275 "percent": 74 00:13:13.275 } 00:13:13.275 }, 00:13:13.275 "base_bdevs_list": [ 00:13:13.275 { 00:13:13.275 "name": "spare", 00:13:13.275 "uuid": "99ab8bc1-371e-5506-aa46-901b502ec83f", 00:13:13.275 "is_configured": true, 00:13:13.275 "data_offset": 2048, 00:13:13.275 "data_size": 63488 00:13:13.275 }, 00:13:13.275 { 00:13:13.275 "name": "BaseBdev2", 00:13:13.275 "uuid": "fed8cf25-d09f-50c3-98ba-b4471a44370b", 00:13:13.275 "is_configured": true, 00:13:13.275 "data_offset": 2048, 00:13:13.275 "data_size": 63488 00:13:13.275 } 00:13:13.275 ] 00:13:13.275 }' 00:13:13.275 20:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:13.275 20:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:13.275 20:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:13.275 20:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:13.275 20:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:13.276 [2024-11-26 20:26:06.754352] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:13:14.103 115.17 IOPS, 345.50 MiB/s [2024-11-26T20:26:07.655Z] [2024-11-26 20:26:07.415840] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:14.103 [2024-11-26 
20:26:07.510532] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:14.103 [2024-11-26 20:26:07.514541] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:14.362 20:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:14.362 20:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:14.362 20:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:14.362 20:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:14.362 20:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:14.362 20:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:14.362 20:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.362 20:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.362 20:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.362 20:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.362 20:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.362 20:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:14.362 "name": "raid_bdev1", 00:13:14.362 "uuid": "64c3663c-821e-4c29-a37e-a6a7c78d7b46", 00:13:14.362 "strip_size_kb": 0, 00:13:14.362 "state": "online", 00:13:14.362 "raid_level": "raid1", 00:13:14.362 "superblock": true, 00:13:14.362 "num_base_bdevs": 2, 00:13:14.362 "num_base_bdevs_discovered": 2, 00:13:14.362 "num_base_bdevs_operational": 2, 00:13:14.362 "base_bdevs_list": [ 00:13:14.362 { 
00:13:14.362 "name": "spare", 00:13:14.362 "uuid": "99ab8bc1-371e-5506-aa46-901b502ec83f", 00:13:14.362 "is_configured": true, 00:13:14.363 "data_offset": 2048, 00:13:14.363 "data_size": 63488 00:13:14.363 }, 00:13:14.363 { 00:13:14.363 "name": "BaseBdev2", 00:13:14.363 "uuid": "fed8cf25-d09f-50c3-98ba-b4471a44370b", 00:13:14.363 "is_configured": true, 00:13:14.363 "data_offset": 2048, 00:13:14.363 "data_size": 63488 00:13:14.363 } 00:13:14.363 ] 00:13:14.363 }' 00:13:14.363 20:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:14.363 20:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:14.363 20:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:14.363 20:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:14.363 20:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:13:14.363 20:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:14.363 20:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:14.363 20:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:14.363 20:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:14.363 20:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:14.363 20:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.363 20:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.363 20:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.363 20:26:07 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.363 20:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.363 103.43 IOPS, 310.29 MiB/s [2024-11-26T20:26:07.915Z] 20:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:14.363 "name": "raid_bdev1", 00:13:14.363 "uuid": "64c3663c-821e-4c29-a37e-a6a7c78d7b46", 00:13:14.363 "strip_size_kb": 0, 00:13:14.363 "state": "online", 00:13:14.363 "raid_level": "raid1", 00:13:14.363 "superblock": true, 00:13:14.363 "num_base_bdevs": 2, 00:13:14.363 "num_base_bdevs_discovered": 2, 00:13:14.363 "num_base_bdevs_operational": 2, 00:13:14.363 "base_bdevs_list": [ 00:13:14.363 { 00:13:14.363 "name": "spare", 00:13:14.363 "uuid": "99ab8bc1-371e-5506-aa46-901b502ec83f", 00:13:14.363 "is_configured": true, 00:13:14.363 "data_offset": 2048, 00:13:14.363 "data_size": 63488 00:13:14.363 }, 00:13:14.363 { 00:13:14.363 "name": "BaseBdev2", 00:13:14.363 "uuid": "fed8cf25-d09f-50c3-98ba-b4471a44370b", 00:13:14.363 "is_configured": true, 00:13:14.363 "data_offset": 2048, 00:13:14.363 "data_size": 63488 00:13:14.363 } 00:13:14.363 ] 00:13:14.363 }' 00:13:14.363 20:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:14.623 20:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:14.623 20:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:14.623 20:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:14.623 20:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:14.623 20:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:14.623 20:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:13:14.623 20:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:14.623 20:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:14.623 20:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:14.623 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.623 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.623 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.623 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.623 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.623 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.623 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.623 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.623 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.623 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.623 "name": "raid_bdev1", 00:13:14.623 "uuid": "64c3663c-821e-4c29-a37e-a6a7c78d7b46", 00:13:14.623 "strip_size_kb": 0, 00:13:14.623 "state": "online", 00:13:14.623 "raid_level": "raid1", 00:13:14.623 "superblock": true, 00:13:14.623 "num_base_bdevs": 2, 00:13:14.623 "num_base_bdevs_discovered": 2, 00:13:14.623 "num_base_bdevs_operational": 2, 00:13:14.623 "base_bdevs_list": [ 00:13:14.623 { 00:13:14.623 "name": "spare", 00:13:14.623 "uuid": "99ab8bc1-371e-5506-aa46-901b502ec83f", 00:13:14.623 "is_configured": true, 
00:13:14.623 "data_offset": 2048, 00:13:14.623 "data_size": 63488 00:13:14.623 }, 00:13:14.623 { 00:13:14.623 "name": "BaseBdev2", 00:13:14.623 "uuid": "fed8cf25-d09f-50c3-98ba-b4471a44370b", 00:13:14.623 "is_configured": true, 00:13:14.623 "data_offset": 2048, 00:13:14.623 "data_size": 63488 00:13:14.623 } 00:13:14.623 ] 00:13:14.623 }' 00:13:14.623 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.623 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.193 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:15.193 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.193 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.193 [2024-11-26 20:26:08.453755] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:15.193 [2024-11-26 20:26:08.453848] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:15.193 00:13:15.193 Latency(us) 00:13:15.193 [2024-11-26T20:26:08.745Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:15.193 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:15.193 raid_bdev1 : 7.64 97.28 291.84 0.00 0.00 13419.59 295.13 114015.47 00:13:15.193 [2024-11-26T20:26:08.745Z] =================================================================================================================== 00:13:15.193 [2024-11-26T20:26:08.745Z] Total : 97.28 291.84 0.00 0.00 13419.59 295.13 114015.47 00:13:15.193 [2024-11-26 20:26:08.494249] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:15.193 [2024-11-26 20:26:08.494342] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:15.193 [2024-11-26 20:26:08.494470] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:15.193 [2024-11-26 20:26:08.494529] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:13:15.193 { 00:13:15.193 "results": [ 00:13:15.193 { 00:13:15.193 "job": "raid_bdev1", 00:13:15.193 "core_mask": "0x1", 00:13:15.193 "workload": "randrw", 00:13:15.193 "percentage": 50, 00:13:15.193 "status": "finished", 00:13:15.193 "queue_depth": 2, 00:13:15.193 "io_size": 3145728, 00:13:15.193 "runtime": 7.637759, 00:13:15.193 "iops": 97.27984347241122, 00:13:15.193 "mibps": 291.8395304172336, 00:13:15.193 "io_failed": 0, 00:13:15.193 "io_timeout": 0, 00:13:15.193 "avg_latency_us": 13419.589601932446, 00:13:15.193 "min_latency_us": 295.12663755458516, 00:13:15.193 "max_latency_us": 114015.46899563319 00:13:15.193 } 00:13:15.193 ], 00:13:15.193 "core_count": 1 00:13:15.193 } 00:13:15.193 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.193 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.193 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:15.193 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.193 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.193 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.193 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:15.193 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:15.193 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:15.193 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks 
/var/tmp/spdk.sock spare /dev/nbd0 00:13:15.193 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:15.193 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:15.193 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:15.193 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:15.193 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:15.193 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:15.193 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:15.193 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:15.193 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:15.453 /dev/nbd0 00:13:15.453 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:15.453 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:15.453 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:15.453 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:13:15.453 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:15.453 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:15.453 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:15.453 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:13:15.453 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:15.453 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:15.453 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:15.453 1+0 records in 00:13:15.453 1+0 records out 00:13:15.453 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000553585 s, 7.4 MB/s 00:13:15.453 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:15.453 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:13:15.453 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:15.453 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:15.453 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:13:15.453 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:15.453 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:15.453 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:15.453 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:15.453 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:15.453 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:15.453 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:15.453 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 
00:13:15.453 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:15.453 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:15.453 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:15.453 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:15.453 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:15.453 20:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:15.715 /dev/nbd1 00:13:15.715 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:15.716 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:15.716 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:15.716 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:13:15.716 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:15.716 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:15.716 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:15.716 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:13:15.716 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:15.716 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:15.716 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:15.716 1+0 
records in 00:13:15.716 1+0 records out 00:13:15.716 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000348408 s, 11.8 MB/s 00:13:15.716 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:15.716 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:13:15.716 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:15.716 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:15.716 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:13:15.716 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:15.716 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:15.716 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:15.716 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:15.716 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:15.716 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:15.716 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:15.716 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:15.716 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:15.716 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:15.994 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 
00:13:15.994 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:15.994 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:15.994 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:15.994 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:15.994 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:15.994 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:15.994 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:15.994 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:15.994 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:15.994 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:15.994 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:15.994 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:15.994 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:15.994 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:16.252 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:16.252 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:16.252 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:16.252 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:16.252 20:26:09 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:16.252 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:16.252 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:16.252 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:16.252 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:16.252 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:16.252 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.252 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.252 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.252 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:16.252 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.252 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.252 [2024-11-26 20:26:09.717430] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:16.252 [2024-11-26 20:26:09.717564] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.252 [2024-11-26 20:26:09.717631] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:13:16.252 [2024-11-26 20:26:09.717678] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:16.252 [2024-11-26 20:26:09.720294] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:16.252 [2024-11-26 20:26:09.720390] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
spare 00:13:16.253 [2024-11-26 20:26:09.720530] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:16.253 [2024-11-26 20:26:09.720605] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:16.253 [2024-11-26 20:26:09.720798] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:16.253 spare 00:13:16.253 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.253 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:16.253 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.253 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.513 [2024-11-26 20:26:09.820784] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:13:16.513 [2024-11-26 20:26:09.820901] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:16.513 [2024-11-26 20:26:09.821319] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002af30 00:13:16.513 [2024-11-26 20:26:09.821572] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:13:16.513 [2024-11-26 20:26:09.821647] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:13:16.513 [2024-11-26 20:26:09.821903] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:16.513 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.513 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:16.513 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:16.513 20:26:09 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:16.513 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:16.513 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:16.513 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:16.513 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.513 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.513 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.513 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.514 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.514 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.514 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.514 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.514 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.514 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.514 "name": "raid_bdev1", 00:13:16.514 "uuid": "64c3663c-821e-4c29-a37e-a6a7c78d7b46", 00:13:16.514 "strip_size_kb": 0, 00:13:16.514 "state": "online", 00:13:16.514 "raid_level": "raid1", 00:13:16.514 "superblock": true, 00:13:16.514 "num_base_bdevs": 2, 00:13:16.514 "num_base_bdevs_discovered": 2, 00:13:16.514 "num_base_bdevs_operational": 2, 00:13:16.514 "base_bdevs_list": [ 00:13:16.514 { 00:13:16.514 "name": "spare", 00:13:16.514 "uuid": 
"99ab8bc1-371e-5506-aa46-901b502ec83f", 00:13:16.514 "is_configured": true, 00:13:16.514 "data_offset": 2048, 00:13:16.514 "data_size": 63488 00:13:16.514 }, 00:13:16.514 { 00:13:16.514 "name": "BaseBdev2", 00:13:16.514 "uuid": "fed8cf25-d09f-50c3-98ba-b4471a44370b", 00:13:16.514 "is_configured": true, 00:13:16.514 "data_offset": 2048, 00:13:16.514 "data_size": 63488 00:13:16.514 } 00:13:16.514 ] 00:13:16.514 }' 00:13:16.514 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.514 20:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.774 20:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:16.774 20:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:16.774 20:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:16.774 20:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:16.774 20:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:16.774 20:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.774 20:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.774 20:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.774 20:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.034 20:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.034 20:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:17.034 "name": "raid_bdev1", 00:13:17.034 "uuid": "64c3663c-821e-4c29-a37e-a6a7c78d7b46", 00:13:17.034 "strip_size_kb": 0, 00:13:17.034 
"state": "online", 00:13:17.034 "raid_level": "raid1", 00:13:17.034 "superblock": true, 00:13:17.034 "num_base_bdevs": 2, 00:13:17.034 "num_base_bdevs_discovered": 2, 00:13:17.034 "num_base_bdevs_operational": 2, 00:13:17.034 "base_bdevs_list": [ 00:13:17.034 { 00:13:17.034 "name": "spare", 00:13:17.034 "uuid": "99ab8bc1-371e-5506-aa46-901b502ec83f", 00:13:17.034 "is_configured": true, 00:13:17.034 "data_offset": 2048, 00:13:17.034 "data_size": 63488 00:13:17.034 }, 00:13:17.034 { 00:13:17.034 "name": "BaseBdev2", 00:13:17.034 "uuid": "fed8cf25-d09f-50c3-98ba-b4471a44370b", 00:13:17.034 "is_configured": true, 00:13:17.034 "data_offset": 2048, 00:13:17.034 "data_size": 63488 00:13:17.034 } 00:13:17.034 ] 00:13:17.034 }' 00:13:17.034 20:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:17.034 20:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:17.034 20:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:17.034 20:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:17.034 20:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:17.034 20:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.034 20:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.034 20:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.034 20:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.034 20:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:17.034 20:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:17.034 
20:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.034 20:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.034 [2024-11-26 20:26:10.492875] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:17.034 20:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.034 20:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:17.034 20:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:17.034 20:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:17.034 20:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:17.034 20:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:17.034 20:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:17.034 20:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.034 20:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.034 20:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.034 20:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.035 20:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.035 20:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.035 20:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.035 20:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:13:17.035 20:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.035 20:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.035 "name": "raid_bdev1", 00:13:17.035 "uuid": "64c3663c-821e-4c29-a37e-a6a7c78d7b46", 00:13:17.035 "strip_size_kb": 0, 00:13:17.035 "state": "online", 00:13:17.035 "raid_level": "raid1", 00:13:17.035 "superblock": true, 00:13:17.035 "num_base_bdevs": 2, 00:13:17.035 "num_base_bdevs_discovered": 1, 00:13:17.035 "num_base_bdevs_operational": 1, 00:13:17.035 "base_bdevs_list": [ 00:13:17.035 { 00:13:17.035 "name": null, 00:13:17.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.035 "is_configured": false, 00:13:17.035 "data_offset": 0, 00:13:17.035 "data_size": 63488 00:13:17.035 }, 00:13:17.035 { 00:13:17.035 "name": "BaseBdev2", 00:13:17.035 "uuid": "fed8cf25-d09f-50c3-98ba-b4471a44370b", 00:13:17.035 "is_configured": true, 00:13:17.035 "data_offset": 2048, 00:13:17.035 "data_size": 63488 00:13:17.035 } 00:13:17.035 ] 00:13:17.035 }' 00:13:17.035 20:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.035 20:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.604 20:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:17.604 20:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.604 20:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.604 [2024-11-26 20:26:11.048109] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:17.604 [2024-11-26 20:26:11.048384] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:17.604 [2024-11-26 20:26:11.048452] 
bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:17.604 [2024-11-26 20:26:11.048539] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:17.604 [2024-11-26 20:26:11.054444] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b000 00:13:17.604 20:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.604 20:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:17.604 [2024-11-26 20:26:11.056485] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:18.541 20:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:18.541 20:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:18.541 20:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:18.541 20:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:18.541 20:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:18.541 20:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.541 20:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.541 20:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.541 20:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.541 20:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.801 20:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:18.801 "name": "raid_bdev1", 00:13:18.801 "uuid": 
"64c3663c-821e-4c29-a37e-a6a7c78d7b46", 00:13:18.801 "strip_size_kb": 0, 00:13:18.801 "state": "online", 00:13:18.801 "raid_level": "raid1", 00:13:18.801 "superblock": true, 00:13:18.801 "num_base_bdevs": 2, 00:13:18.801 "num_base_bdevs_discovered": 2, 00:13:18.801 "num_base_bdevs_operational": 2, 00:13:18.801 "process": { 00:13:18.801 "type": "rebuild", 00:13:18.801 "target": "spare", 00:13:18.801 "progress": { 00:13:18.801 "blocks": 20480, 00:13:18.801 "percent": 32 00:13:18.801 } 00:13:18.801 }, 00:13:18.801 "base_bdevs_list": [ 00:13:18.801 { 00:13:18.801 "name": "spare", 00:13:18.801 "uuid": "99ab8bc1-371e-5506-aa46-901b502ec83f", 00:13:18.801 "is_configured": true, 00:13:18.801 "data_offset": 2048, 00:13:18.801 "data_size": 63488 00:13:18.801 }, 00:13:18.801 { 00:13:18.801 "name": "BaseBdev2", 00:13:18.801 "uuid": "fed8cf25-d09f-50c3-98ba-b4471a44370b", 00:13:18.801 "is_configured": true, 00:13:18.801 "data_offset": 2048, 00:13:18.801 "data_size": 63488 00:13:18.801 } 00:13:18.801 ] 00:13:18.801 }' 00:13:18.801 20:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:18.801 20:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:18.801 20:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:18.801 20:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:18.801 20:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:18.801 20:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.801 20:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.801 [2024-11-26 20:26:12.205004] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:18.801 [2024-11-26 20:26:12.263460] 
bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:18.801 [2024-11-26 20:26:12.263661] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:18.801 [2024-11-26 20:26:12.263705] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:18.801 [2024-11-26 20:26:12.263728] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:18.801 20:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.801 20:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:18.801 20:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:18.801 20:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:18.801 20:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:18.801 20:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:18.801 20:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:18.801 20:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.801 20:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.801 20:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.801 20:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.801 20:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.801 20:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.801 20:26:12 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.801 20:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.802 20:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.802 20:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.802 "name": "raid_bdev1", 00:13:18.802 "uuid": "64c3663c-821e-4c29-a37e-a6a7c78d7b46", 00:13:18.802 "strip_size_kb": 0, 00:13:18.802 "state": "online", 00:13:18.802 "raid_level": "raid1", 00:13:18.802 "superblock": true, 00:13:18.802 "num_base_bdevs": 2, 00:13:18.802 "num_base_bdevs_discovered": 1, 00:13:18.802 "num_base_bdevs_operational": 1, 00:13:18.802 "base_bdevs_list": [ 00:13:18.802 { 00:13:18.802 "name": null, 00:13:18.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.802 "is_configured": false, 00:13:18.802 "data_offset": 0, 00:13:18.802 "data_size": 63488 00:13:18.802 }, 00:13:18.802 { 00:13:18.802 "name": "BaseBdev2", 00:13:18.802 "uuid": "fed8cf25-d09f-50c3-98ba-b4471a44370b", 00:13:18.802 "is_configured": true, 00:13:18.802 "data_offset": 2048, 00:13:18.802 "data_size": 63488 00:13:18.802 } 00:13:18.802 ] 00:13:18.802 }' 00:13:18.802 20:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.802 20:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.372 20:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:19.372 20:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.372 20:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.372 [2024-11-26 20:26:12.749828] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:19.372 [2024-11-26 20:26:12.749960] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:19.372 [2024-11-26 20:26:12.750028] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:19.372 [2024-11-26 20:26:12.750071] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:19.372 [2024-11-26 20:26:12.750597] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:19.372 [2024-11-26 20:26:12.750690] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:19.372 [2024-11-26 20:26:12.750842] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:19.372 [2024-11-26 20:26:12.750890] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:19.372 [2024-11-26 20:26:12.750944] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:19.372 [2024-11-26 20:26:12.751006] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:19.372 [2024-11-26 20:26:12.757135] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:13:19.372 spare 00:13:19.372 20:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.372 20:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:19.372 [2024-11-26 20:26:12.759378] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:20.333 20:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:20.333 20:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:20.333 20:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:20.333 20:26:13 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:20.333 20:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:20.333 20:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.333 20:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.333 20:26:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.334 20:26:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.334 20:26:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.334 20:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:20.334 "name": "raid_bdev1", 00:13:20.334 "uuid": "64c3663c-821e-4c29-a37e-a6a7c78d7b46", 00:13:20.334 "strip_size_kb": 0, 00:13:20.334 "state": "online", 00:13:20.334 "raid_level": "raid1", 00:13:20.334 "superblock": true, 00:13:20.334 "num_base_bdevs": 2, 00:13:20.334 "num_base_bdevs_discovered": 2, 00:13:20.334 "num_base_bdevs_operational": 2, 00:13:20.334 "process": { 00:13:20.334 "type": "rebuild", 00:13:20.334 "target": "spare", 00:13:20.334 "progress": { 00:13:20.334 "blocks": 20480, 00:13:20.334 "percent": 32 00:13:20.334 } 00:13:20.334 }, 00:13:20.334 "base_bdevs_list": [ 00:13:20.334 { 00:13:20.334 "name": "spare", 00:13:20.334 "uuid": "99ab8bc1-371e-5506-aa46-901b502ec83f", 00:13:20.334 "is_configured": true, 00:13:20.334 "data_offset": 2048, 00:13:20.334 "data_size": 63488 00:13:20.334 }, 00:13:20.334 { 00:13:20.334 "name": "BaseBdev2", 00:13:20.334 "uuid": "fed8cf25-d09f-50c3-98ba-b4471a44370b", 00:13:20.334 "is_configured": true, 00:13:20.334 "data_offset": 2048, 00:13:20.334 "data_size": 63488 00:13:20.334 } 00:13:20.334 ] 00:13:20.334 }' 00:13:20.334 20:26:13 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:20.334 20:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:20.334 20:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:20.595 20:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:20.595 20:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:20.595 20:26:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.595 20:26:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.595 [2024-11-26 20:26:13.923945] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:20.595 [2024-11-26 20:26:13.966722] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:20.595 [2024-11-26 20:26:13.966935] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:20.595 [2024-11-26 20:26:13.966955] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:20.595 [2024-11-26 20:26:13.966970] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:20.595 20:26:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.595 20:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:20.595 20:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:20.595 20:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:20.595 20:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:20.595 20:26:13 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:20.595 20:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:20.595 20:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.595 20:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.595 20:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.595 20:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.595 20:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.595 20:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.595 20:26:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.595 20:26:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.595 20:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.595 20:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.595 "name": "raid_bdev1", 00:13:20.595 "uuid": "64c3663c-821e-4c29-a37e-a6a7c78d7b46", 00:13:20.595 "strip_size_kb": 0, 00:13:20.595 "state": "online", 00:13:20.595 "raid_level": "raid1", 00:13:20.595 "superblock": true, 00:13:20.595 "num_base_bdevs": 2, 00:13:20.595 "num_base_bdevs_discovered": 1, 00:13:20.595 "num_base_bdevs_operational": 1, 00:13:20.595 "base_bdevs_list": [ 00:13:20.595 { 00:13:20.595 "name": null, 00:13:20.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.595 "is_configured": false, 00:13:20.595 "data_offset": 0, 00:13:20.595 "data_size": 63488 00:13:20.595 }, 00:13:20.595 { 00:13:20.595 "name": "BaseBdev2", 00:13:20.595 "uuid": 
"fed8cf25-d09f-50c3-98ba-b4471a44370b", 00:13:20.595 "is_configured": true, 00:13:20.595 "data_offset": 2048, 00:13:20.595 "data_size": 63488 00:13:20.595 } 00:13:20.595 ] 00:13:20.595 }' 00:13:20.595 20:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.595 20:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.165 20:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:21.165 20:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:21.165 20:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:21.165 20:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:21.165 20:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:21.165 20:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.165 20:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.165 20:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.165 20:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.165 20:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.165 20:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:21.165 "name": "raid_bdev1", 00:13:21.165 "uuid": "64c3663c-821e-4c29-a37e-a6a7c78d7b46", 00:13:21.165 "strip_size_kb": 0, 00:13:21.165 "state": "online", 00:13:21.165 "raid_level": "raid1", 00:13:21.165 "superblock": true, 00:13:21.165 "num_base_bdevs": 2, 00:13:21.165 "num_base_bdevs_discovered": 1, 00:13:21.165 "num_base_bdevs_operational": 1, 00:13:21.165 
"base_bdevs_list": [ 00:13:21.165 { 00:13:21.165 "name": null, 00:13:21.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.165 "is_configured": false, 00:13:21.165 "data_offset": 0, 00:13:21.165 "data_size": 63488 00:13:21.165 }, 00:13:21.165 { 00:13:21.165 "name": "BaseBdev2", 00:13:21.165 "uuid": "fed8cf25-d09f-50c3-98ba-b4471a44370b", 00:13:21.165 "is_configured": true, 00:13:21.165 "data_offset": 2048, 00:13:21.165 "data_size": 63488 00:13:21.165 } 00:13:21.165 ] 00:13:21.165 }' 00:13:21.165 20:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:21.165 20:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:21.165 20:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:21.165 20:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:21.165 20:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:21.165 20:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.165 20:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.165 20:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.165 20:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:21.165 20:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.165 20:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.165 [2024-11-26 20:26:14.613101] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:21.165 [2024-11-26 20:26:14.613249] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:13:21.165 [2024-11-26 20:26:14.613279] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:21.165 [2024-11-26 20:26:14.613293] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:21.165 [2024-11-26 20:26:14.613789] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:21.166 [2024-11-26 20:26:14.613815] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:21.166 [2024-11-26 20:26:14.613904] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:21.166 [2024-11-26 20:26:14.613924] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:21.166 [2024-11-26 20:26:14.613933] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:21.166 [2024-11-26 20:26:14.613950] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:21.166 BaseBdev1 00:13:21.166 20:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.166 20:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:22.101 20:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:22.101 20:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:22.101 20:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:22.101 20:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:22.101 20:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:22.101 20:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:13:22.101 20:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.101 20:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.101 20:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.101 20:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.101 20:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.101 20:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.101 20:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.101 20:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.101 20:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.360 20:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.360 "name": "raid_bdev1", 00:13:22.360 "uuid": "64c3663c-821e-4c29-a37e-a6a7c78d7b46", 00:13:22.360 "strip_size_kb": 0, 00:13:22.360 "state": "online", 00:13:22.360 "raid_level": "raid1", 00:13:22.360 "superblock": true, 00:13:22.360 "num_base_bdevs": 2, 00:13:22.360 "num_base_bdevs_discovered": 1, 00:13:22.360 "num_base_bdevs_operational": 1, 00:13:22.360 "base_bdevs_list": [ 00:13:22.360 { 00:13:22.360 "name": null, 00:13:22.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.360 "is_configured": false, 00:13:22.360 "data_offset": 0, 00:13:22.360 "data_size": 63488 00:13:22.360 }, 00:13:22.360 { 00:13:22.360 "name": "BaseBdev2", 00:13:22.360 "uuid": "fed8cf25-d09f-50c3-98ba-b4471a44370b", 00:13:22.360 "is_configured": true, 00:13:22.360 "data_offset": 2048, 00:13:22.360 "data_size": 63488 00:13:22.360 } 00:13:22.360 ] 00:13:22.360 }' 
00:13:22.360 20:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.360 20:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.619 20:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:22.619 20:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:22.619 20:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:22.619 20:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:22.619 20:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:22.619 20:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.619 20:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.619 20:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.619 20:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.619 20:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.619 20:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:22.619 "name": "raid_bdev1", 00:13:22.619 "uuid": "64c3663c-821e-4c29-a37e-a6a7c78d7b46", 00:13:22.619 "strip_size_kb": 0, 00:13:22.619 "state": "online", 00:13:22.619 "raid_level": "raid1", 00:13:22.619 "superblock": true, 00:13:22.619 "num_base_bdevs": 2, 00:13:22.619 "num_base_bdevs_discovered": 1, 00:13:22.619 "num_base_bdevs_operational": 1, 00:13:22.619 "base_bdevs_list": [ 00:13:22.619 { 00:13:22.619 "name": null, 00:13:22.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.619 "is_configured": false, 00:13:22.619 "data_offset": 0, 
00:13:22.619 "data_size": 63488 00:13:22.619 }, 00:13:22.619 { 00:13:22.619 "name": "BaseBdev2", 00:13:22.619 "uuid": "fed8cf25-d09f-50c3-98ba-b4471a44370b", 00:13:22.619 "is_configured": true, 00:13:22.619 "data_offset": 2048, 00:13:22.619 "data_size": 63488 00:13:22.619 } 00:13:22.619 ] 00:13:22.619 }' 00:13:22.619 20:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:22.619 20:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:22.619 20:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:22.939 20:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:22.939 20:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:22.939 20:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:13:22.939 20:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:22.939 20:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:22.939 20:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:22.939 20:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:22.939 20:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:22.939 20:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:22.939 20:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.939 20:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:13:22.939 [2024-11-26 20:26:16.222639] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:22.939 [2024-11-26 20:26:16.222860] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:22.939 [2024-11-26 20:26:16.222929] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:22.939 request: 00:13:22.939 { 00:13:22.939 "base_bdev": "BaseBdev1", 00:13:22.939 "raid_bdev": "raid_bdev1", 00:13:22.939 "method": "bdev_raid_add_base_bdev", 00:13:22.939 "req_id": 1 00:13:22.939 } 00:13:22.939 Got JSON-RPC error response 00:13:22.939 response: 00:13:22.939 { 00:13:22.939 "code": -22, 00:13:22.939 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:22.939 } 00:13:22.939 20:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:22.939 20:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:13:22.939 20:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:22.939 20:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:22.939 20:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:22.939 20:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:23.885 20:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:23.885 20:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:23.885 20:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:23.885 20:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:23.885 20:26:17 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:23.885 20:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:23.885 20:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.885 20:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.885 20:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.885 20:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.885 20:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.885 20:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.885 20:26:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.885 20:26:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.885 20:26:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.885 20:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.885 "name": "raid_bdev1", 00:13:23.885 "uuid": "64c3663c-821e-4c29-a37e-a6a7c78d7b46", 00:13:23.885 "strip_size_kb": 0, 00:13:23.885 "state": "online", 00:13:23.885 "raid_level": "raid1", 00:13:23.885 "superblock": true, 00:13:23.885 "num_base_bdevs": 2, 00:13:23.885 "num_base_bdevs_discovered": 1, 00:13:23.885 "num_base_bdevs_operational": 1, 00:13:23.885 "base_bdevs_list": [ 00:13:23.885 { 00:13:23.885 "name": null, 00:13:23.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.885 "is_configured": false, 00:13:23.885 "data_offset": 0, 00:13:23.885 "data_size": 63488 00:13:23.885 }, 00:13:23.885 { 00:13:23.885 "name": "BaseBdev2", 00:13:23.885 "uuid": 
"fed8cf25-d09f-50c3-98ba-b4471a44370b", 00:13:23.885 "is_configured": true, 00:13:23.885 "data_offset": 2048, 00:13:23.885 "data_size": 63488 00:13:23.885 } 00:13:23.885 ] 00:13:23.885 }' 00:13:23.885 20:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.885 20:26:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.455 20:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:24.455 20:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:24.455 20:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:24.455 20:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:24.455 20:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:24.455 20:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.455 20:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.455 20:26:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.455 20:26:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.455 20:26:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.455 20:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:24.455 "name": "raid_bdev1", 00:13:24.455 "uuid": "64c3663c-821e-4c29-a37e-a6a7c78d7b46", 00:13:24.455 "strip_size_kb": 0, 00:13:24.455 "state": "online", 00:13:24.455 "raid_level": "raid1", 00:13:24.455 "superblock": true, 00:13:24.455 "num_base_bdevs": 2, 00:13:24.455 "num_base_bdevs_discovered": 1, 00:13:24.455 "num_base_bdevs_operational": 1, 00:13:24.455 
"base_bdevs_list": [ 00:13:24.455 { 00:13:24.455 "name": null, 00:13:24.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.455 "is_configured": false, 00:13:24.455 "data_offset": 0, 00:13:24.455 "data_size": 63488 00:13:24.455 }, 00:13:24.455 { 00:13:24.455 "name": "BaseBdev2", 00:13:24.455 "uuid": "fed8cf25-d09f-50c3-98ba-b4471a44370b", 00:13:24.455 "is_configured": true, 00:13:24.455 "data_offset": 2048, 00:13:24.455 "data_size": 63488 00:13:24.455 } 00:13:24.455 ] 00:13:24.455 }' 00:13:24.455 20:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:24.455 20:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:24.455 20:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:24.455 20:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:24.455 20:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 88033 00:13:24.455 20:26:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 88033 ']' 00:13:24.455 20:26:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 88033 00:13:24.455 20:26:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:13:24.455 20:26:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:24.455 20:26:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88033 00:13:24.455 20:26:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:24.455 20:26:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:24.455 20:26:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88033' 00:13:24.455 
killing process with pid 88033 00:13:24.455 20:26:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 88033 00:13:24.455 Received shutdown signal, test time was about 17.074276 seconds 00:13:24.455 00:13:24.455 Latency(us) 00:13:24.455 [2024-11-26T20:26:18.007Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:24.455 [2024-11-26T20:26:18.007Z] =================================================================================================================== 00:13:24.455 [2024-11-26T20:26:18.007Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:24.455 [2024-11-26 20:26:17.907994] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:24.455 20:26:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 88033 00:13:24.455 [2024-11-26 20:26:17.908175] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:24.455 [2024-11-26 20:26:17.908267] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:24.455 [2024-11-26 20:26:17.908280] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:13:24.455 [2024-11-26 20:26:17.950114] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:25.025 20:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:25.025 00:13:25.025 real 0m19.205s 00:13:25.025 user 0m25.674s 00:13:25.025 sys 0m2.238s 00:13:25.025 20:26:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:25.025 20:26:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.025 ************************************ 00:13:25.025 END TEST raid_rebuild_test_sb_io 00:13:25.025 ************************************ 00:13:25.025 20:26:18 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:13:25.025 20:26:18 bdev_raid -- 
bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:13:25.025 20:26:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:25.025 20:26:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:25.025 20:26:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:25.025 ************************************ 00:13:25.025 START TEST raid_rebuild_test 00:13:25.025 ************************************ 00:13:25.025 20:26:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false false true 00:13:25.025 20:26:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:25.025 20:26:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:25.025 20:26:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:25.025 20:26:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:25.025 20:26:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:25.025 20:26:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:25.025 20:26:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:25.025 20:26:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:25.025 20:26:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:25.025 20:26:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:25.025 20:26:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:25.025 20:26:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:25.025 20:26:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:25.025 20:26:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev3 00:13:25.025 20:26:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:25.025 20:26:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:25.025 20:26:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:25.025 20:26:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:25.025 20:26:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:25.025 20:26:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:25.025 20:26:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:25.025 20:26:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:25.025 20:26:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:25.025 20:26:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:25.025 20:26:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:25.025 20:26:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:25.025 20:26:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:25.025 20:26:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:25.025 20:26:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:25.025 20:26:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=88711 00:13:25.025 20:26:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:25.025 20:26:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 88711 00:13:25.025 20:26:18 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@831 -- # '[' -z 88711 ']' 00:13:25.025 20:26:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.025 20:26:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:25.025 20:26:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:25.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:25.025 20:26:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:25.025 20:26:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.025 [2024-11-26 20:26:18.475889] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:25.025 [2024-11-26 20:26:18.476126] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88711 ] 00:13:25.025 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:25.025 Zero copy mechanism will not be used. 
00:13:25.376 [2024-11-26 20:26:18.621315] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.376 [2024-11-26 20:26:18.705571] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.376 [2024-11-26 20:26:18.778271] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:25.376 [2024-11-26 20:26:18.778317] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:25.996 20:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:25.996 20:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:13:25.996 20:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:25.996 20:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:25.996 20:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.996 20:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.996 BaseBdev1_malloc 00:13:25.996 20:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.996 20:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:25.996 20:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.996 20:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.996 [2024-11-26 20:26:19.363805] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:25.996 [2024-11-26 20:26:19.363889] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:25.996 [2024-11-26 20:26:19.363917] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:25.996 [2024-11-26 20:26:19.363934] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:25.996 [2024-11-26 20:26:19.366247] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:25.996 [2024-11-26 20:26:19.366289] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:25.996 BaseBdev1 00:13:25.996 20:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.996 20:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:25.996 20:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:25.996 20:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.996 20:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.996 BaseBdev2_malloc 00:13:25.996 20:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.996 20:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:25.996 20:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.996 20:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.996 [2024-11-26 20:26:19.396794] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:25.996 [2024-11-26 20:26:19.396941] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:25.996 [2024-11-26 20:26:19.396997] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:25.996 [2024-11-26 20:26:19.397051] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:25.996 [2024-11-26 20:26:19.400082] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:25.996 [2024-11-26 20:26:19.400176] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:25.996 BaseBdev2 00:13:25.996 20:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.996 20:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:25.996 20:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:25.996 20:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.996 20:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.996 BaseBdev3_malloc 00:13:25.996 20:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.996 20:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:25.996 20:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.996 20:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.996 [2024-11-26 20:26:19.427427] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:25.996 [2024-11-26 20:26:19.427527] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:25.996 [2024-11-26 20:26:19.427568] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:25.996 [2024-11-26 20:26:19.427595] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:25.996 [2024-11-26 20:26:19.429777] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:25.996 [2024-11-26 20:26:19.429850] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:25.996 BaseBdev3 00:13:25.996 20:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.996 
20:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:25.996 20:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:25.996 20:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.996 20:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.996 BaseBdev4_malloc 00:13:25.996 20:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.996 20:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:25.996 20:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.996 20:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.996 [2024-11-26 20:26:19.457827] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:25.996 [2024-11-26 20:26:19.457935] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:25.996 [2024-11-26 20:26:19.457984] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:25.996 [2024-11-26 20:26:19.458026] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:25.996 [2024-11-26 20:26:19.460125] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:25.996 [2024-11-26 20:26:19.460199] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:25.996 BaseBdev4 00:13:25.996 20:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.996 20:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:25.996 20:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:25.996 20:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.996 spare_malloc 00:13:25.996 20:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.996 20:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:25.996 20:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.996 20:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.996 spare_delay 00:13:25.996 20:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.996 20:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:25.996 20:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.996 20:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.996 [2024-11-26 20:26:19.499707] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:25.996 [2024-11-26 20:26:19.499826] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:25.997 [2024-11-26 20:26:19.499874] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:25.997 [2024-11-26 20:26:19.499905] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:25.997 [2024-11-26 20:26:19.502318] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:25.997 [2024-11-26 20:26:19.502395] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:25.997 spare 00:13:25.997 20:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.997 20:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r 
raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:25.997 20:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.997 20:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.997 [2024-11-26 20:26:19.511769] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:25.997 [2024-11-26 20:26:19.513755] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:25.997 [2024-11-26 20:26:19.513869] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:25.997 [2024-11-26 20:26:19.513956] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:25.997 [2024-11-26 20:26:19.514091] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:13:25.997 [2024-11-26 20:26:19.514130] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:25.997 [2024-11-26 20:26:19.514410] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:25.997 [2024-11-26 20:26:19.514616] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:13:25.997 [2024-11-26 20:26:19.514686] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:13:25.997 [2024-11-26 20:26:19.514882] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:25.997 20:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.997 20:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:25.997 20:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:25.997 20:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:13:25.997 20:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:25.997 20:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:25.997 20:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:25.997 20:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.997 20:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.997 20:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.997 20:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.997 20:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.997 20:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.997 20:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.997 20:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.997 20:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.256 20:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.256 "name": "raid_bdev1", 00:13:26.256 "uuid": "02646963-c681-452c-8f80-ba2936cc60eb", 00:13:26.256 "strip_size_kb": 0, 00:13:26.256 "state": "online", 00:13:26.256 "raid_level": "raid1", 00:13:26.256 "superblock": false, 00:13:26.256 "num_base_bdevs": 4, 00:13:26.256 "num_base_bdevs_discovered": 4, 00:13:26.256 "num_base_bdevs_operational": 4, 00:13:26.256 "base_bdevs_list": [ 00:13:26.256 { 00:13:26.256 "name": "BaseBdev1", 00:13:26.256 "uuid": "b36db0d0-9c2b-57c0-a57c-a58410bdf62b", 00:13:26.256 "is_configured": true, 00:13:26.256 "data_offset": 0, 00:13:26.256 "data_size": 65536 00:13:26.256 }, 00:13:26.256 { 00:13:26.256 
"name": "BaseBdev2", 00:13:26.256 "uuid": "062bf012-60bb-5747-a8cd-966fa818e284", 00:13:26.256 "is_configured": true, 00:13:26.256 "data_offset": 0, 00:13:26.256 "data_size": 65536 00:13:26.256 }, 00:13:26.256 { 00:13:26.256 "name": "BaseBdev3", 00:13:26.256 "uuid": "95bcb534-687e-51bf-a656-ca7a77bd819f", 00:13:26.256 "is_configured": true, 00:13:26.256 "data_offset": 0, 00:13:26.256 "data_size": 65536 00:13:26.256 }, 00:13:26.256 { 00:13:26.256 "name": "BaseBdev4", 00:13:26.256 "uuid": "b9f35298-0ae0-53be-b0bb-9b3b2147df27", 00:13:26.256 "is_configured": true, 00:13:26.256 "data_offset": 0, 00:13:26.256 "data_size": 65536 00:13:26.256 } 00:13:26.256 ] 00:13:26.256 }' 00:13:26.256 20:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.256 20:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.515 20:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:26.515 20:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.515 20:26:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.515 20:26:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:26.515 [2024-11-26 20:26:19.999338] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:26.515 20:26:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.515 20:26:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:26.515 20:26:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.515 20:26:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.515 20:26:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.515 20:26:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq 
-r '.[].base_bdevs_list[0].data_offset' 00:13:26.515 20:26:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.775 20:26:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:26.775 20:26:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:26.775 20:26:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:26.775 20:26:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:26.775 20:26:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:26.775 20:26:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:26.775 20:26:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:26.775 20:26:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:26.775 20:26:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:26.775 20:26:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:26.775 20:26:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:26.775 20:26:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:26.775 20:26:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:26.775 20:26:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:26.775 [2024-11-26 20:26:20.318527] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:27.035 /dev/nbd0 00:13:27.035 20:26:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:27.035 20:26:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:27.035 
20:26:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:27.035 20:26:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:27.035 20:26:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:27.035 20:26:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:27.035 20:26:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:27.035 20:26:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:27.035 20:26:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:27.035 20:26:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:27.035 20:26:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:27.035 1+0 records in 00:13:27.035 1+0 records out 00:13:27.035 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000635044 s, 6.4 MB/s 00:13:27.035 20:26:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.035 20:26:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:27.035 20:26:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.035 20:26:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:27.035 20:26:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:27.035 20:26:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:27.035 20:26:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:27.035 20:26:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 
00:13:27.035 20:26:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:27.035 20:26:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:13:33.599 65536+0 records in 00:13:33.599 65536+0 records out 00:13:33.599 33554432 bytes (34 MB, 32 MiB) copied, 5.79894 s, 5.8 MB/s 00:13:33.599 20:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:33.599 20:26:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:33.599 20:26:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:33.599 20:26:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:33.599 20:26:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:33.599 20:26:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:33.599 20:26:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:33.599 20:26:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:33.599 [2024-11-26 20:26:26.429808] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:33.599 20:26:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:33.599 20:26:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:33.599 20:26:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:33.599 20:26:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:33.599 20:26:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:33.599 20:26:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:33.599 20:26:26 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:33.599 20:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:33.599 20:26:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.599 20:26:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.600 [2024-11-26 20:26:26.447604] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:33.600 20:26:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.600 20:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:33.600 20:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:33.600 20:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:33.600 20:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:33.600 20:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:33.600 20:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:33.600 20:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.600 20:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.600 20:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.600 20:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.600 20:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.600 20:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.600 20:26:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:33.600 20:26:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.600 20:26:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.600 20:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.600 "name": "raid_bdev1", 00:13:33.600 "uuid": "02646963-c681-452c-8f80-ba2936cc60eb", 00:13:33.600 "strip_size_kb": 0, 00:13:33.600 "state": "online", 00:13:33.600 "raid_level": "raid1", 00:13:33.600 "superblock": false, 00:13:33.600 "num_base_bdevs": 4, 00:13:33.600 "num_base_bdevs_discovered": 3, 00:13:33.600 "num_base_bdevs_operational": 3, 00:13:33.600 "base_bdevs_list": [ 00:13:33.600 { 00:13:33.600 "name": null, 00:13:33.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.600 "is_configured": false, 00:13:33.600 "data_offset": 0, 00:13:33.600 "data_size": 65536 00:13:33.600 }, 00:13:33.600 { 00:13:33.600 "name": "BaseBdev2", 00:13:33.600 "uuid": "062bf012-60bb-5747-a8cd-966fa818e284", 00:13:33.600 "is_configured": true, 00:13:33.600 "data_offset": 0, 00:13:33.600 "data_size": 65536 00:13:33.600 }, 00:13:33.600 { 00:13:33.600 "name": "BaseBdev3", 00:13:33.600 "uuid": "95bcb534-687e-51bf-a656-ca7a77bd819f", 00:13:33.600 "is_configured": true, 00:13:33.600 "data_offset": 0, 00:13:33.600 "data_size": 65536 00:13:33.600 }, 00:13:33.600 { 00:13:33.600 "name": "BaseBdev4", 00:13:33.600 "uuid": "b9f35298-0ae0-53be-b0bb-9b3b2147df27", 00:13:33.600 "is_configured": true, 00:13:33.600 "data_offset": 0, 00:13:33.600 "data_size": 65536 00:13:33.600 } 00:13:33.600 ] 00:13:33.600 }' 00:13:33.600 20:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.600 20:26:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.600 20:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:33.600 20:26:26 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.600 20:26:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.600 [2024-11-26 20:26:26.898874] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:33.600 [2024-11-26 20:26:26.902446] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:13:33.600 [2024-11-26 20:26:26.904496] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:33.600 20:26:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.600 20:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:34.537 20:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:34.537 20:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:34.537 20:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:34.537 20:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:34.537 20:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:34.537 20:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.537 20:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.537 20:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.537 20:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.537 20:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.537 20:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:34.537 "name": "raid_bdev1", 00:13:34.537 "uuid": "02646963-c681-452c-8f80-ba2936cc60eb", 
00:13:34.537 "strip_size_kb": 0, 00:13:34.537 "state": "online", 00:13:34.537 "raid_level": "raid1", 00:13:34.537 "superblock": false, 00:13:34.537 "num_base_bdevs": 4, 00:13:34.537 "num_base_bdevs_discovered": 4, 00:13:34.537 "num_base_bdevs_operational": 4, 00:13:34.537 "process": { 00:13:34.537 "type": "rebuild", 00:13:34.537 "target": "spare", 00:13:34.537 "progress": { 00:13:34.537 "blocks": 20480, 00:13:34.537 "percent": 31 00:13:34.537 } 00:13:34.537 }, 00:13:34.537 "base_bdevs_list": [ 00:13:34.537 { 00:13:34.537 "name": "spare", 00:13:34.537 "uuid": "8250e7f3-74bc-5d3b-8c30-17e6950a6405", 00:13:34.537 "is_configured": true, 00:13:34.537 "data_offset": 0, 00:13:34.537 "data_size": 65536 00:13:34.537 }, 00:13:34.537 { 00:13:34.537 "name": "BaseBdev2", 00:13:34.537 "uuid": "062bf012-60bb-5747-a8cd-966fa818e284", 00:13:34.537 "is_configured": true, 00:13:34.537 "data_offset": 0, 00:13:34.537 "data_size": 65536 00:13:34.537 }, 00:13:34.537 { 00:13:34.537 "name": "BaseBdev3", 00:13:34.537 "uuid": "95bcb534-687e-51bf-a656-ca7a77bd819f", 00:13:34.537 "is_configured": true, 00:13:34.537 "data_offset": 0, 00:13:34.537 "data_size": 65536 00:13:34.537 }, 00:13:34.537 { 00:13:34.537 "name": "BaseBdev4", 00:13:34.537 "uuid": "b9f35298-0ae0-53be-b0bb-9b3b2147df27", 00:13:34.537 "is_configured": true, 00:13:34.537 "data_offset": 0, 00:13:34.537 "data_size": 65536 00:13:34.537 } 00:13:34.537 ] 00:13:34.537 }' 00:13:34.537 20:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:34.537 20:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:34.537 20:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:34.537 20:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:34.537 20:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 
00:13:34.537 20:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.537 20:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.537 [2024-11-26 20:26:28.035689] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:34.797 [2024-11-26 20:26:28.112185] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:34.797 [2024-11-26 20:26:28.112279] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:34.797 [2024-11-26 20:26:28.112317] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:34.797 [2024-11-26 20:26:28.112326] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:34.797 20:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.797 20:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:34.797 20:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:34.797 20:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:34.797 20:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:34.797 20:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:34.797 20:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:34.797 20:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.797 20:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.797 20:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.797 20:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 
00:13:34.797 20:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.797 20:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.797 20:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.797 20:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.797 20:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.797 20:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.797 "name": "raid_bdev1", 00:13:34.797 "uuid": "02646963-c681-452c-8f80-ba2936cc60eb", 00:13:34.797 "strip_size_kb": 0, 00:13:34.797 "state": "online", 00:13:34.797 "raid_level": "raid1", 00:13:34.797 "superblock": false, 00:13:34.797 "num_base_bdevs": 4, 00:13:34.797 "num_base_bdevs_discovered": 3, 00:13:34.797 "num_base_bdevs_operational": 3, 00:13:34.797 "base_bdevs_list": [ 00:13:34.797 { 00:13:34.797 "name": null, 00:13:34.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.797 "is_configured": false, 00:13:34.797 "data_offset": 0, 00:13:34.797 "data_size": 65536 00:13:34.797 }, 00:13:34.797 { 00:13:34.797 "name": "BaseBdev2", 00:13:34.797 "uuid": "062bf012-60bb-5747-a8cd-966fa818e284", 00:13:34.797 "is_configured": true, 00:13:34.797 "data_offset": 0, 00:13:34.797 "data_size": 65536 00:13:34.797 }, 00:13:34.797 { 00:13:34.797 "name": "BaseBdev3", 00:13:34.797 "uuid": "95bcb534-687e-51bf-a656-ca7a77bd819f", 00:13:34.797 "is_configured": true, 00:13:34.797 "data_offset": 0, 00:13:34.797 "data_size": 65536 00:13:34.797 }, 00:13:34.797 { 00:13:34.798 "name": "BaseBdev4", 00:13:34.798 "uuid": "b9f35298-0ae0-53be-b0bb-9b3b2147df27", 00:13:34.798 "is_configured": true, 00:13:34.798 "data_offset": 0, 00:13:34.798 "data_size": 65536 00:13:34.798 } 00:13:34.798 ] 00:13:34.798 }' 00:13:34.798 20:26:28 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.798 20:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.057 20:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:35.057 20:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:35.057 20:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:35.057 20:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:35.057 20:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:35.057 20:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.057 20:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.057 20:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.057 20:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.057 20:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.387 20:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:35.387 "name": "raid_bdev1", 00:13:35.387 "uuid": "02646963-c681-452c-8f80-ba2936cc60eb", 00:13:35.387 "strip_size_kb": 0, 00:13:35.387 "state": "online", 00:13:35.387 "raid_level": "raid1", 00:13:35.387 "superblock": false, 00:13:35.387 "num_base_bdevs": 4, 00:13:35.387 "num_base_bdevs_discovered": 3, 00:13:35.387 "num_base_bdevs_operational": 3, 00:13:35.387 "base_bdevs_list": [ 00:13:35.387 { 00:13:35.387 "name": null, 00:13:35.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.387 "is_configured": false, 00:13:35.387 "data_offset": 0, 00:13:35.387 "data_size": 65536 00:13:35.387 }, 00:13:35.387 { 00:13:35.387 "name": "BaseBdev2", 00:13:35.387 "uuid": 
"062bf012-60bb-5747-a8cd-966fa818e284", 00:13:35.387 "is_configured": true, 00:13:35.387 "data_offset": 0, 00:13:35.387 "data_size": 65536 00:13:35.387 }, 00:13:35.387 { 00:13:35.387 "name": "BaseBdev3", 00:13:35.387 "uuid": "95bcb534-687e-51bf-a656-ca7a77bd819f", 00:13:35.387 "is_configured": true, 00:13:35.387 "data_offset": 0, 00:13:35.387 "data_size": 65536 00:13:35.387 }, 00:13:35.387 { 00:13:35.387 "name": "BaseBdev4", 00:13:35.387 "uuid": "b9f35298-0ae0-53be-b0bb-9b3b2147df27", 00:13:35.387 "is_configured": true, 00:13:35.387 "data_offset": 0, 00:13:35.387 "data_size": 65536 00:13:35.387 } 00:13:35.387 ] 00:13:35.387 }' 00:13:35.387 20:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:35.387 20:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:35.387 20:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:35.387 20:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:35.387 20:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:35.387 20:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.387 20:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.387 [2024-11-26 20:26:28.705035] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:35.387 [2024-11-26 20:26:28.708500] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:13:35.387 [2024-11-26 20:26:28.710566] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:35.387 20:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.387 20:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:36.319 20:26:29 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:36.319 20:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:36.319 20:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:36.319 20:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:36.319 20:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:36.319 20:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.319 20:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.319 20:26:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.319 20:26:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.319 20:26:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.319 20:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:36.319 "name": "raid_bdev1", 00:13:36.319 "uuid": "02646963-c681-452c-8f80-ba2936cc60eb", 00:13:36.319 "strip_size_kb": 0, 00:13:36.319 "state": "online", 00:13:36.319 "raid_level": "raid1", 00:13:36.319 "superblock": false, 00:13:36.319 "num_base_bdevs": 4, 00:13:36.319 "num_base_bdevs_discovered": 4, 00:13:36.319 "num_base_bdevs_operational": 4, 00:13:36.319 "process": { 00:13:36.319 "type": "rebuild", 00:13:36.319 "target": "spare", 00:13:36.319 "progress": { 00:13:36.319 "blocks": 20480, 00:13:36.319 "percent": 31 00:13:36.319 } 00:13:36.319 }, 00:13:36.319 "base_bdevs_list": [ 00:13:36.319 { 00:13:36.319 "name": "spare", 00:13:36.319 "uuid": "8250e7f3-74bc-5d3b-8c30-17e6950a6405", 00:13:36.319 "is_configured": true, 00:13:36.319 "data_offset": 0, 00:13:36.319 "data_size": 65536 00:13:36.319 }, 00:13:36.319 { 
00:13:36.319 "name": "BaseBdev2", 00:13:36.319 "uuid": "062bf012-60bb-5747-a8cd-966fa818e284", 00:13:36.319 "is_configured": true, 00:13:36.319 "data_offset": 0, 00:13:36.319 "data_size": 65536 00:13:36.319 }, 00:13:36.319 { 00:13:36.319 "name": "BaseBdev3", 00:13:36.319 "uuid": "95bcb534-687e-51bf-a656-ca7a77bd819f", 00:13:36.319 "is_configured": true, 00:13:36.319 "data_offset": 0, 00:13:36.319 "data_size": 65536 00:13:36.319 }, 00:13:36.319 { 00:13:36.319 "name": "BaseBdev4", 00:13:36.319 "uuid": "b9f35298-0ae0-53be-b0bb-9b3b2147df27", 00:13:36.319 "is_configured": true, 00:13:36.319 "data_offset": 0, 00:13:36.319 "data_size": 65536 00:13:36.319 } 00:13:36.319 ] 00:13:36.319 }' 00:13:36.319 20:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:36.319 20:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:36.319 20:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:36.319 20:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:36.319 20:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:36.319 20:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:36.319 20:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:36.319 20:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:36.319 20:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:36.319 20:26:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.319 20:26:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.319 [2024-11-26 20:26:29.857816] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:36.578 
[2024-11-26 20:26:29.917677] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09ca0 00:13:36.578 20:26:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.578 20:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:36.578 20:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:36.578 20:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:36.578 20:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:36.578 20:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:36.578 20:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:36.578 20:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:36.578 20:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.578 20:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.578 20:26:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.578 20:26:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.578 20:26:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.578 20:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:36.578 "name": "raid_bdev1", 00:13:36.578 "uuid": "02646963-c681-452c-8f80-ba2936cc60eb", 00:13:36.578 "strip_size_kb": 0, 00:13:36.578 "state": "online", 00:13:36.578 "raid_level": "raid1", 00:13:36.578 "superblock": false, 00:13:36.578 "num_base_bdevs": 4, 00:13:36.578 "num_base_bdevs_discovered": 3, 00:13:36.578 "num_base_bdevs_operational": 3, 00:13:36.578 "process": { 
00:13:36.578 "type": "rebuild", 00:13:36.578 "target": "spare", 00:13:36.578 "progress": { 00:13:36.578 "blocks": 24576, 00:13:36.578 "percent": 37 00:13:36.578 } 00:13:36.578 }, 00:13:36.578 "base_bdevs_list": [ 00:13:36.578 { 00:13:36.578 "name": "spare", 00:13:36.578 "uuid": "8250e7f3-74bc-5d3b-8c30-17e6950a6405", 00:13:36.578 "is_configured": true, 00:13:36.578 "data_offset": 0, 00:13:36.578 "data_size": 65536 00:13:36.578 }, 00:13:36.578 { 00:13:36.578 "name": null, 00:13:36.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.578 "is_configured": false, 00:13:36.578 "data_offset": 0, 00:13:36.578 "data_size": 65536 00:13:36.578 }, 00:13:36.578 { 00:13:36.578 "name": "BaseBdev3", 00:13:36.578 "uuid": "95bcb534-687e-51bf-a656-ca7a77bd819f", 00:13:36.578 "is_configured": true, 00:13:36.578 "data_offset": 0, 00:13:36.578 "data_size": 65536 00:13:36.578 }, 00:13:36.578 { 00:13:36.578 "name": "BaseBdev4", 00:13:36.578 "uuid": "b9f35298-0ae0-53be-b0bb-9b3b2147df27", 00:13:36.578 "is_configured": true, 00:13:36.578 "data_offset": 0, 00:13:36.578 "data_size": 65536 00:13:36.578 } 00:13:36.578 ] 00:13:36.578 }' 00:13:36.578 20:26:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:36.578 20:26:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:36.578 20:26:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:36.578 20:26:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:36.578 20:26:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=379 00:13:36.578 20:26:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:36.578 20:26:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:36.578 20:26:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:13:36.578 20:26:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:36.578 20:26:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:36.578 20:26:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:36.578 20:26:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.578 20:26:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.578 20:26:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.578 20:26:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.578 20:26:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.578 20:26:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:36.578 "name": "raid_bdev1", 00:13:36.578 "uuid": "02646963-c681-452c-8f80-ba2936cc60eb", 00:13:36.578 "strip_size_kb": 0, 00:13:36.578 "state": "online", 00:13:36.578 "raid_level": "raid1", 00:13:36.578 "superblock": false, 00:13:36.578 "num_base_bdevs": 4, 00:13:36.578 "num_base_bdevs_discovered": 3, 00:13:36.578 "num_base_bdevs_operational": 3, 00:13:36.578 "process": { 00:13:36.578 "type": "rebuild", 00:13:36.578 "target": "spare", 00:13:36.578 "progress": { 00:13:36.578 "blocks": 26624, 00:13:36.578 "percent": 40 00:13:36.578 } 00:13:36.578 }, 00:13:36.578 "base_bdevs_list": [ 00:13:36.578 { 00:13:36.578 "name": "spare", 00:13:36.578 "uuid": "8250e7f3-74bc-5d3b-8c30-17e6950a6405", 00:13:36.578 "is_configured": true, 00:13:36.578 "data_offset": 0, 00:13:36.578 "data_size": 65536 00:13:36.578 }, 00:13:36.578 { 00:13:36.578 "name": null, 00:13:36.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.578 "is_configured": false, 00:13:36.578 "data_offset": 0, 00:13:36.578 "data_size": 65536 00:13:36.578 }, 
00:13:36.578 { 00:13:36.578 "name": "BaseBdev3", 00:13:36.578 "uuid": "95bcb534-687e-51bf-a656-ca7a77bd819f", 00:13:36.578 "is_configured": true, 00:13:36.578 "data_offset": 0, 00:13:36.578 "data_size": 65536 00:13:36.578 }, 00:13:36.578 { 00:13:36.578 "name": "BaseBdev4", 00:13:36.578 "uuid": "b9f35298-0ae0-53be-b0bb-9b3b2147df27", 00:13:36.578 "is_configured": true, 00:13:36.578 "data_offset": 0, 00:13:36.578 "data_size": 65536 00:13:36.578 } 00:13:36.578 ] 00:13:36.578 }' 00:13:36.578 20:26:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:36.836 20:26:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:36.836 20:26:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:36.836 20:26:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:36.836 20:26:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:37.773 20:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:37.773 20:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:37.773 20:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.773 20:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:37.773 20:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:37.773 20:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.773 20:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.773 20:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.773 20:26:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:37.773 20:26:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.773 20:26:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.773 20:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.773 "name": "raid_bdev1", 00:13:37.773 "uuid": "02646963-c681-452c-8f80-ba2936cc60eb", 00:13:37.773 "strip_size_kb": 0, 00:13:37.773 "state": "online", 00:13:37.773 "raid_level": "raid1", 00:13:37.773 "superblock": false, 00:13:37.773 "num_base_bdevs": 4, 00:13:37.773 "num_base_bdevs_discovered": 3, 00:13:37.773 "num_base_bdevs_operational": 3, 00:13:37.773 "process": { 00:13:37.773 "type": "rebuild", 00:13:37.773 "target": "spare", 00:13:37.773 "progress": { 00:13:37.773 "blocks": 49152, 00:13:37.773 "percent": 75 00:13:37.773 } 00:13:37.773 }, 00:13:37.773 "base_bdevs_list": [ 00:13:37.773 { 00:13:37.773 "name": "spare", 00:13:37.773 "uuid": "8250e7f3-74bc-5d3b-8c30-17e6950a6405", 00:13:37.773 "is_configured": true, 00:13:37.773 "data_offset": 0, 00:13:37.773 "data_size": 65536 00:13:37.773 }, 00:13:37.773 { 00:13:37.773 "name": null, 00:13:37.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.773 "is_configured": false, 00:13:37.773 "data_offset": 0, 00:13:37.773 "data_size": 65536 00:13:37.773 }, 00:13:37.773 { 00:13:37.773 "name": "BaseBdev3", 00:13:37.773 "uuid": "95bcb534-687e-51bf-a656-ca7a77bd819f", 00:13:37.773 "is_configured": true, 00:13:37.773 "data_offset": 0, 00:13:37.773 "data_size": 65536 00:13:37.773 }, 00:13:37.773 { 00:13:37.773 "name": "BaseBdev4", 00:13:37.773 "uuid": "b9f35298-0ae0-53be-b0bb-9b3b2147df27", 00:13:37.773 "is_configured": true, 00:13:37.773 "data_offset": 0, 00:13:37.773 "data_size": 65536 00:13:37.773 } 00:13:37.773 ] 00:13:37.773 }' 00:13:37.773 20:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.773 20:26:31 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:37.773 20:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:38.032 20:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:38.032 20:26:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:38.601 [2024-11-26 20:26:31.929000] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:38.601 [2024-11-26 20:26:31.929112] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:38.601 [2024-11-26 20:26:31.929175] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:38.861 20:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:38.861 20:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:38.861 20:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:38.861 20:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:38.861 20:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:38.861 20:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:38.861 20:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.861 20:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.861 20:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.861 20:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.861 20:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.120 20:26:32 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.120 "name": "raid_bdev1", 00:13:39.120 "uuid": "02646963-c681-452c-8f80-ba2936cc60eb", 00:13:39.120 "strip_size_kb": 0, 00:13:39.120 "state": "online", 00:13:39.120 "raid_level": "raid1", 00:13:39.120 "superblock": false, 00:13:39.120 "num_base_bdevs": 4, 00:13:39.120 "num_base_bdevs_discovered": 3, 00:13:39.120 "num_base_bdevs_operational": 3, 00:13:39.120 "base_bdevs_list": [ 00:13:39.120 { 00:13:39.120 "name": "spare", 00:13:39.120 "uuid": "8250e7f3-74bc-5d3b-8c30-17e6950a6405", 00:13:39.120 "is_configured": true, 00:13:39.120 "data_offset": 0, 00:13:39.120 "data_size": 65536 00:13:39.120 }, 00:13:39.120 { 00:13:39.120 "name": null, 00:13:39.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.120 "is_configured": false, 00:13:39.120 "data_offset": 0, 00:13:39.120 "data_size": 65536 00:13:39.120 }, 00:13:39.120 { 00:13:39.120 "name": "BaseBdev3", 00:13:39.120 "uuid": "95bcb534-687e-51bf-a656-ca7a77bd819f", 00:13:39.120 "is_configured": true, 00:13:39.120 "data_offset": 0, 00:13:39.120 "data_size": 65536 00:13:39.120 }, 00:13:39.120 { 00:13:39.120 "name": "BaseBdev4", 00:13:39.120 "uuid": "b9f35298-0ae0-53be-b0bb-9b3b2147df27", 00:13:39.120 "is_configured": true, 00:13:39.120 "data_offset": 0, 00:13:39.120 "data_size": 65536 00:13:39.120 } 00:13:39.120 ] 00:13:39.120 }' 00:13:39.120 20:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.120 20:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:39.120 20:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.120 20:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:39.120 20:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:39.120 20:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 
none none 00:13:39.120 20:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.120 20:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:39.120 20:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:39.120 20:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.120 20:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.120 20:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.120 20:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.120 20:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.120 20:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.121 20:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.121 "name": "raid_bdev1", 00:13:39.121 "uuid": "02646963-c681-452c-8f80-ba2936cc60eb", 00:13:39.121 "strip_size_kb": 0, 00:13:39.121 "state": "online", 00:13:39.121 "raid_level": "raid1", 00:13:39.121 "superblock": false, 00:13:39.121 "num_base_bdevs": 4, 00:13:39.121 "num_base_bdevs_discovered": 3, 00:13:39.121 "num_base_bdevs_operational": 3, 00:13:39.121 "base_bdevs_list": [ 00:13:39.121 { 00:13:39.121 "name": "spare", 00:13:39.121 "uuid": "8250e7f3-74bc-5d3b-8c30-17e6950a6405", 00:13:39.121 "is_configured": true, 00:13:39.121 "data_offset": 0, 00:13:39.121 "data_size": 65536 00:13:39.121 }, 00:13:39.121 { 00:13:39.121 "name": null, 00:13:39.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.121 "is_configured": false, 00:13:39.121 "data_offset": 0, 00:13:39.121 "data_size": 65536 00:13:39.121 }, 00:13:39.121 { 00:13:39.121 "name": "BaseBdev3", 00:13:39.121 "uuid": "95bcb534-687e-51bf-a656-ca7a77bd819f", 
00:13:39.121 "is_configured": true, 00:13:39.121 "data_offset": 0, 00:13:39.121 "data_size": 65536 00:13:39.121 }, 00:13:39.121 { 00:13:39.121 "name": "BaseBdev4", 00:13:39.121 "uuid": "b9f35298-0ae0-53be-b0bb-9b3b2147df27", 00:13:39.121 "is_configured": true, 00:13:39.121 "data_offset": 0, 00:13:39.121 "data_size": 65536 00:13:39.121 } 00:13:39.121 ] 00:13:39.121 }' 00:13:39.121 20:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.121 20:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:39.121 20:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.121 20:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:39.121 20:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:39.121 20:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:39.121 20:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:39.121 20:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:39.121 20:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:39.121 20:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:39.121 20:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.121 20:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.121 20:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.121 20:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.121 20:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.121 
20:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.121 20:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.121 20:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.121 20:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.380 20:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.380 "name": "raid_bdev1", 00:13:39.380 "uuid": "02646963-c681-452c-8f80-ba2936cc60eb", 00:13:39.380 "strip_size_kb": 0, 00:13:39.380 "state": "online", 00:13:39.380 "raid_level": "raid1", 00:13:39.380 "superblock": false, 00:13:39.380 "num_base_bdevs": 4, 00:13:39.380 "num_base_bdevs_discovered": 3, 00:13:39.380 "num_base_bdevs_operational": 3, 00:13:39.380 "base_bdevs_list": [ 00:13:39.380 { 00:13:39.380 "name": "spare", 00:13:39.380 "uuid": "8250e7f3-74bc-5d3b-8c30-17e6950a6405", 00:13:39.380 "is_configured": true, 00:13:39.380 "data_offset": 0, 00:13:39.380 "data_size": 65536 00:13:39.380 }, 00:13:39.380 { 00:13:39.380 "name": null, 00:13:39.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.380 "is_configured": false, 00:13:39.380 "data_offset": 0, 00:13:39.380 "data_size": 65536 00:13:39.380 }, 00:13:39.380 { 00:13:39.380 "name": "BaseBdev3", 00:13:39.380 "uuid": "95bcb534-687e-51bf-a656-ca7a77bd819f", 00:13:39.380 "is_configured": true, 00:13:39.380 "data_offset": 0, 00:13:39.380 "data_size": 65536 00:13:39.380 }, 00:13:39.380 { 00:13:39.380 "name": "BaseBdev4", 00:13:39.380 "uuid": "b9f35298-0ae0-53be-b0bb-9b3b2147df27", 00:13:39.380 "is_configured": true, 00:13:39.380 "data_offset": 0, 00:13:39.380 "data_size": 65536 00:13:39.380 } 00:13:39.380 ] 00:13:39.380 }' 00:13:39.380 20:26:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.380 20:26:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:13:39.639 20:26:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:39.639 20:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.639 20:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.639 [2024-11-26 20:26:33.112802] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:39.639 [2024-11-26 20:26:33.112852] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:39.639 [2024-11-26 20:26:33.112959] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:39.639 [2024-11-26 20:26:33.113055] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:39.639 [2024-11-26 20:26:33.113075] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:13:39.639 20:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.639 20:26:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.639 20:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.639 20:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.639 20:26:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:39.639 20:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.639 20:26:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:39.639 20:26:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:39.639 20:26:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:39.639 20:26:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks 
/var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:39.639 20:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:39.639 20:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:39.639 20:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:39.639 20:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:39.639 20:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:39.639 20:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:39.639 20:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:39.639 20:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:39.639 20:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:39.898 /dev/nbd0 00:13:40.175 20:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:40.175 20:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:40.175 20:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:40.175 20:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:40.175 20:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:40.175 20:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:40.175 20:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:40.175 20:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:40.175 20:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:40.175 
20:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:40.175 20:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:40.175 1+0 records in 00:13:40.175 1+0 records out 00:13:40.175 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280895 s, 14.6 MB/s 00:13:40.175 20:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:40.175 20:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:40.175 20:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:40.175 20:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:40.175 20:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:40.175 20:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:40.175 20:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:40.175 20:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:40.175 /dev/nbd1 00:13:40.175 20:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:40.175 20:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:40.175 20:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:40.175 20:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:40.175 20:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:40.175 20:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:40.175 20:26:33 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:40.175 20:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:40.175 20:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:40.175 20:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:40.175 20:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:40.175 1+0 records in 00:13:40.175 1+0 records out 00:13:40.175 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000291971 s, 14.0 MB/s 00:13:40.437 20:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:40.437 20:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:40.437 20:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:40.437 20:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:40.437 20:26:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:40.437 20:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:40.437 20:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:40.437 20:26:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:40.437 20:26:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:40.437 20:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:40.437 20:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:40.437 20:26:33 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:40.437 20:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:40.437 20:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:40.437 20:26:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:40.697 20:26:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:40.697 20:26:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:40.697 20:26:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:40.697 20:26:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:40.697 20:26:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:40.697 20:26:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:40.697 20:26:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:40.697 20:26:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:40.697 20:26:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:40.697 20:26:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:40.956 20:26:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:40.956 20:26:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:40.956 20:26:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:40.956 20:26:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:40.956 20:26:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:40.956 20:26:34 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:40.956 20:26:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:40.956 20:26:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:40.956 20:26:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:40.956 20:26:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 88711 00:13:40.956 20:26:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 88711 ']' 00:13:40.956 20:26:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 88711 00:13:40.956 20:26:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:13:40.956 20:26:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:40.956 20:26:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88711 00:13:40.956 killing process with pid 88711 00:13:40.956 Received shutdown signal, test time was about 60.000000 seconds 00:13:40.956 00:13:40.956 Latency(us) 00:13:40.956 [2024-11-26T20:26:34.508Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:40.956 [2024-11-26T20:26:34.508Z] =================================================================================================================== 00:13:40.956 [2024-11-26T20:26:34.508Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:40.956 20:26:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:40.956 20:26:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:40.956 20:26:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88711' 00:13:40.956 20:26:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 88711 00:13:40.956 [2024-11-26 
20:26:34.368431] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:40.956 20:26:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 88711 00:13:40.956 [2024-11-26 20:26:34.448449] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:41.525 20:26:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:41.525 00:13:41.525 real 0m16.421s 00:13:41.525 user 0m18.827s 00:13:41.525 sys 0m3.159s 00:13:41.525 20:26:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:41.525 20:26:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.525 ************************************ 00:13:41.525 END TEST raid_rebuild_test 00:13:41.525 ************************************ 00:13:41.525 20:26:34 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:13:41.525 20:26:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:41.525 20:26:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:41.525 20:26:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:41.525 ************************************ 00:13:41.525 START TEST raid_rebuild_test_sb 00:13:41.525 ************************************ 00:13:41.525 20:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true false true 00:13:41.525 20:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:41.525 20:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:41.525 20:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:41.525 20:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:41.525 20:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 
00:13:41.525 20:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:41.525 20:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:41.525 20:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:41.525 20:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:41.525 20:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:41.525 20:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:41.525 20:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:41.525 20:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:41.525 20:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:41.525 20:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:41.525 20:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:41.525 20:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:41.525 20:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:41.525 20:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:41.526 20:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:41.526 20:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:41.526 20:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:41.526 20:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:41.526 20:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:41.526 20:26:34 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:41.526 20:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:41.526 20:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:41.526 20:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:41.526 20:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:41.526 20:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:41.526 20:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=89146 00:13:41.526 20:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:41.526 20:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 89146 00:13:41.526 20:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 89146 ']' 00:13:41.526 20:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:41.526 20:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:41.526 20:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:41.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:41.526 20:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:41.526 20:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.526 [2024-11-26 20:26:34.975983] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:13:41.526 [2024-11-26 20:26:34.976148] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89146 ] 00:13:41.526 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:41.526 Zero copy mechanism will not be used. 00:13:41.785 [2024-11-26 20:26:35.136437] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:41.785 [2024-11-26 20:26:35.220052] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.785 [2024-11-26 20:26:35.293042] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:41.785 [2024-11-26 20:26:35.293077] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:42.355 20:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:42.355 20:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:13:42.355 20:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:42.355 20:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:42.355 20:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.355 20:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.355 BaseBdev1_malloc 00:13:42.355 20:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.355 20:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:42.355 20:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.355 20:26:35 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:42.355 [2024-11-26 20:26:35.863299] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:42.355 [2024-11-26 20:26:35.863374] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:42.355 [2024-11-26 20:26:35.863421] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:42.355 [2024-11-26 20:26:35.863443] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:42.355 [2024-11-26 20:26:35.865901] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.355 [2024-11-26 20:26:35.865945] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:42.355 BaseBdev1 00:13:42.355 20:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.355 20:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:42.355 20:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:42.355 20:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.355 20:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.355 BaseBdev2_malloc 00:13:42.355 20:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.355 20:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:42.355 20:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.355 20:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.355 [2024-11-26 20:26:35.905151] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:42.355 [2024-11-26 
20:26:35.905254] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:42.355 [2024-11-26 20:26:35.905294] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:42.355 [2024-11-26 20:26:35.905314] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:42.614 [2024-11-26 20:26:35.908467] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.614 [2024-11-26 20:26:35.908518] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:42.614 BaseBdev2 00:13:42.614 20:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.614 20:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:42.614 20:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:42.614 20:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.614 20:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.614 BaseBdev3_malloc 00:13:42.614 20:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.614 20:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:42.614 20:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.614 20:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.614 [2024-11-26 20:26:35.940591] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:42.614 [2024-11-26 20:26:35.940685] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:42.614 [2024-11-26 20:26:35.940717] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000008a80 00:13:42.614 [2024-11-26 20:26:35.940728] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:42.614 [2024-11-26 20:26:35.943131] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.614 [2024-11-26 20:26:35.943173] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:42.614 BaseBdev3 00:13:42.614 20:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.614 20:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:42.614 20:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:42.615 20:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.615 20:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.615 BaseBdev4_malloc 00:13:42.615 20:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.615 20:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:42.615 20:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.615 20:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.615 [2024-11-26 20:26:35.971124] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:42.615 [2024-11-26 20:26:35.971193] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:42.615 [2024-11-26 20:26:35.971220] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:42.615 [2024-11-26 20:26:35.971236] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:42.615 [2024-11-26 20:26:35.973425] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.615 [2024-11-26 20:26:35.973467] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:42.615 BaseBdev4 00:13:42.615 20:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.615 20:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:42.615 20:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.615 20:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.615 spare_malloc 00:13:42.615 20:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.615 20:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:42.615 20:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.615 20:26:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.615 spare_delay 00:13:42.615 20:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.615 20:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:42.615 20:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.615 20:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.615 [2024-11-26 20:26:36.013349] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:42.615 [2024-11-26 20:26:36.013418] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:42.615 [2024-11-26 20:26:36.013441] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x61600000a880 00:13:42.615 [2024-11-26 20:26:36.013455] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:42.615 [2024-11-26 20:26:36.015686] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.615 [2024-11-26 20:26:36.015724] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:42.615 spare 00:13:42.615 20:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.615 20:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:42.615 20:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.615 20:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.615 [2024-11-26 20:26:36.025417] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:42.615 [2024-11-26 20:26:36.027243] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:42.615 [2024-11-26 20:26:36.027320] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:42.615 [2024-11-26 20:26:36.027371] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:42.615 [2024-11-26 20:26:36.027543] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:13:42.615 [2024-11-26 20:26:36.027566] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:42.615 [2024-11-26 20:26:36.027841] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:42.615 [2024-11-26 20:26:36.028008] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:13:42.615 [2024-11-26 20:26:36.028028] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000006280 00:13:42.615 [2024-11-26 20:26:36.028162] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:42.615 20:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.615 20:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:42.615 20:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:42.615 20:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:42.615 20:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:42.615 20:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:42.615 20:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:42.615 20:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.615 20:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.615 20:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.615 20:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.615 20:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.615 20:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.615 20:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.615 20:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.615 20:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.615 20:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:13:42.615 "name": "raid_bdev1", 00:13:42.615 "uuid": "afc4b320-b4ce-4641-aee7-bb420b5073a2", 00:13:42.615 "strip_size_kb": 0, 00:13:42.615 "state": "online", 00:13:42.615 "raid_level": "raid1", 00:13:42.615 "superblock": true, 00:13:42.615 "num_base_bdevs": 4, 00:13:42.615 "num_base_bdevs_discovered": 4, 00:13:42.615 "num_base_bdevs_operational": 4, 00:13:42.615 "base_bdevs_list": [ 00:13:42.615 { 00:13:42.615 "name": "BaseBdev1", 00:13:42.615 "uuid": "40e978c6-af6a-5152-b0b8-0a2271adc67d", 00:13:42.615 "is_configured": true, 00:13:42.615 "data_offset": 2048, 00:13:42.615 "data_size": 63488 00:13:42.615 }, 00:13:42.615 { 00:13:42.615 "name": "BaseBdev2", 00:13:42.615 "uuid": "938fa21c-a105-5d90-a154-baa1627e1e80", 00:13:42.615 "is_configured": true, 00:13:42.615 "data_offset": 2048, 00:13:42.615 "data_size": 63488 00:13:42.615 }, 00:13:42.615 { 00:13:42.615 "name": "BaseBdev3", 00:13:42.615 "uuid": "5522f3fe-bce1-5320-80b9-42748d1aaa0c", 00:13:42.615 "is_configured": true, 00:13:42.615 "data_offset": 2048, 00:13:42.615 "data_size": 63488 00:13:42.615 }, 00:13:42.615 { 00:13:42.615 "name": "BaseBdev4", 00:13:42.615 "uuid": "961d751e-90fe-559e-92a7-7fe8668ca51a", 00:13:42.615 "is_configured": true, 00:13:42.615 "data_offset": 2048, 00:13:42.615 "data_size": 63488 00:13:42.615 } 00:13:42.615 ] 00:13:42.615 }' 00:13:42.615 20:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.615 20:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.182 20:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:43.182 20:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:43.182 20:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.182 20:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.182 
[2024-11-26 20:26:36.529024] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:43.182 20:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.182 20:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:43.182 20:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.182 20:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.182 20:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:43.182 20:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.182 20:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.182 20:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:43.182 20:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:43.182 20:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:43.182 20:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:43.182 20:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:43.182 20:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:43.182 20:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:43.182 20:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:43.182 20:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:43.182 20:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:43.182 20:26:36 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@12 -- # local i 00:13:43.182 20:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:43.182 20:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:43.182 20:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:43.441 [2024-11-26 20:26:36.804203] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:43.441 /dev/nbd0 00:13:43.441 20:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:43.441 20:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:43.441 20:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:43.441 20:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:13:43.441 20:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:43.441 20:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:43.441 20:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:43.441 20:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:13:43.441 20:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:43.441 20:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:43.441 20:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:43.441 1+0 records in 00:13:43.441 1+0 records out 00:13:43.441 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000256497 s, 16.0 MB/s 00:13:43.442 20:26:36 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:43.442 20:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:43.442 20:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:43.442 20:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:43.442 20:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:43.442 20:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:43.442 20:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:43.442 20:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:43.442 20:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:43.442 20:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:13:50.031 63488+0 records in 00:13:50.031 63488+0 records out 00:13:50.031 32505856 bytes (33 MB, 31 MiB) copied, 5.92453 s, 5.5 MB/s 00:13:50.031 20:26:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:50.031 20:26:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:50.031 20:26:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:50.031 20:26:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:50.031 20:26:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:50.031 20:26:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:50.031 20:26:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:50.031 [2024-11-26 20:26:43.059866] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:50.031 20:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:50.031 20:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:50.031 20:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:50.031 20:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:50.031 20:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:50.031 20:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:50.031 20:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:50.031 20:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:50.031 20:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:50.031 20:26:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.031 20:26:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.031 [2024-11-26 20:26:43.095886] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:50.031 20:26:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.031 20:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:50.031 20:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:50.031 20:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:50.031 20:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:13:50.031 20:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:50.031 20:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:50.031 20:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.031 20:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.031 20:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.031 20:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.031 20:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.031 20:26:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.031 20:26:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.031 20:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.031 20:26:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.031 20:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.031 "name": "raid_bdev1", 00:13:50.031 "uuid": "afc4b320-b4ce-4641-aee7-bb420b5073a2", 00:13:50.031 "strip_size_kb": 0, 00:13:50.031 "state": "online", 00:13:50.031 "raid_level": "raid1", 00:13:50.031 "superblock": true, 00:13:50.031 "num_base_bdevs": 4, 00:13:50.031 "num_base_bdevs_discovered": 3, 00:13:50.031 "num_base_bdevs_operational": 3, 00:13:50.031 "base_bdevs_list": [ 00:13:50.031 { 00:13:50.031 "name": null, 00:13:50.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.031 "is_configured": false, 00:13:50.031 "data_offset": 0, 00:13:50.031 "data_size": 63488 00:13:50.031 }, 00:13:50.031 { 00:13:50.031 "name": "BaseBdev2", 00:13:50.031 "uuid": 
"938fa21c-a105-5d90-a154-baa1627e1e80", 00:13:50.031 "is_configured": true, 00:13:50.031 "data_offset": 2048, 00:13:50.031 "data_size": 63488 00:13:50.031 }, 00:13:50.031 { 00:13:50.031 "name": "BaseBdev3", 00:13:50.031 "uuid": "5522f3fe-bce1-5320-80b9-42748d1aaa0c", 00:13:50.031 "is_configured": true, 00:13:50.031 "data_offset": 2048, 00:13:50.031 "data_size": 63488 00:13:50.031 }, 00:13:50.031 { 00:13:50.031 "name": "BaseBdev4", 00:13:50.031 "uuid": "961d751e-90fe-559e-92a7-7fe8668ca51a", 00:13:50.031 "is_configured": true, 00:13:50.031 "data_offset": 2048, 00:13:50.031 "data_size": 63488 00:13:50.031 } 00:13:50.031 ] 00:13:50.031 }' 00:13:50.031 20:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.031 20:26:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.031 20:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:50.031 20:26:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.031 20:26:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.032 [2024-11-26 20:26:43.539213] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:50.032 [2024-11-26 20:26:43.542869] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:13:50.032 [2024-11-26 20:26:43.545147] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:50.032 20:26:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.032 20:26:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:51.407 20:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:51.407 20:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:13:51.407 20:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:51.407 20:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:51.407 20:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:51.407 20:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.407 20:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.407 20:26:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.407 20:26:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.407 20:26:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.407 20:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:51.407 "name": "raid_bdev1", 00:13:51.407 "uuid": "afc4b320-b4ce-4641-aee7-bb420b5073a2", 00:13:51.407 "strip_size_kb": 0, 00:13:51.407 "state": "online", 00:13:51.407 "raid_level": "raid1", 00:13:51.407 "superblock": true, 00:13:51.407 "num_base_bdevs": 4, 00:13:51.407 "num_base_bdevs_discovered": 4, 00:13:51.407 "num_base_bdevs_operational": 4, 00:13:51.407 "process": { 00:13:51.407 "type": "rebuild", 00:13:51.407 "target": "spare", 00:13:51.407 "progress": { 00:13:51.407 "blocks": 20480, 00:13:51.407 "percent": 32 00:13:51.407 } 00:13:51.407 }, 00:13:51.407 "base_bdevs_list": [ 00:13:51.407 { 00:13:51.407 "name": "spare", 00:13:51.407 "uuid": "6507ddfe-a18b-54c1-a16c-8a29f690f149", 00:13:51.407 "is_configured": true, 00:13:51.407 "data_offset": 2048, 00:13:51.407 "data_size": 63488 00:13:51.407 }, 00:13:51.407 { 00:13:51.407 "name": "BaseBdev2", 00:13:51.407 "uuid": "938fa21c-a105-5d90-a154-baa1627e1e80", 00:13:51.407 "is_configured": true, 00:13:51.407 "data_offset": 2048, 
00:13:51.407 "data_size": 63488 00:13:51.407 }, 00:13:51.407 { 00:13:51.407 "name": "BaseBdev3", 00:13:51.407 "uuid": "5522f3fe-bce1-5320-80b9-42748d1aaa0c", 00:13:51.407 "is_configured": true, 00:13:51.407 "data_offset": 2048, 00:13:51.407 "data_size": 63488 00:13:51.407 }, 00:13:51.407 { 00:13:51.407 "name": "BaseBdev4", 00:13:51.407 "uuid": "961d751e-90fe-559e-92a7-7fe8668ca51a", 00:13:51.407 "is_configured": true, 00:13:51.407 "data_offset": 2048, 00:13:51.407 "data_size": 63488 00:13:51.407 } 00:13:51.407 ] 00:13:51.407 }' 00:13:51.407 20:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:51.407 20:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:51.407 20:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:51.407 20:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:51.407 20:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:51.407 20:26:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.407 20:26:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.407 [2024-11-26 20:26:44.684223] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:51.407 [2024-11-26 20:26:44.752774] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:51.407 [2024-11-26 20:26:44.752897] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:51.407 [2024-11-26 20:26:44.752926] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:51.407 [2024-11-26 20:26:44.752936] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:51.407 20:26:44 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.407 20:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:51.407 20:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:51.407 20:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:51.407 20:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:51.407 20:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:51.407 20:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:51.407 20:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.407 20:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.407 20:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.407 20:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.407 20:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.407 20:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.407 20:26:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.407 20:26:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.407 20:26:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.407 20:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.407 "name": "raid_bdev1", 00:13:51.407 "uuid": "afc4b320-b4ce-4641-aee7-bb420b5073a2", 00:13:51.407 "strip_size_kb": 0, 00:13:51.407 "state": "online", 00:13:51.407 "raid_level": "raid1", 
00:13:51.407 "superblock": true, 00:13:51.407 "num_base_bdevs": 4, 00:13:51.407 "num_base_bdevs_discovered": 3, 00:13:51.407 "num_base_bdevs_operational": 3, 00:13:51.407 "base_bdevs_list": [ 00:13:51.407 { 00:13:51.407 "name": null, 00:13:51.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.407 "is_configured": false, 00:13:51.407 "data_offset": 0, 00:13:51.407 "data_size": 63488 00:13:51.407 }, 00:13:51.407 { 00:13:51.407 "name": "BaseBdev2", 00:13:51.407 "uuid": "938fa21c-a105-5d90-a154-baa1627e1e80", 00:13:51.407 "is_configured": true, 00:13:51.407 "data_offset": 2048, 00:13:51.407 "data_size": 63488 00:13:51.407 }, 00:13:51.407 { 00:13:51.407 "name": "BaseBdev3", 00:13:51.407 "uuid": "5522f3fe-bce1-5320-80b9-42748d1aaa0c", 00:13:51.407 "is_configured": true, 00:13:51.407 "data_offset": 2048, 00:13:51.407 "data_size": 63488 00:13:51.407 }, 00:13:51.407 { 00:13:51.407 "name": "BaseBdev4", 00:13:51.407 "uuid": "961d751e-90fe-559e-92a7-7fe8668ca51a", 00:13:51.407 "is_configured": true, 00:13:51.407 "data_offset": 2048, 00:13:51.407 "data_size": 63488 00:13:51.407 } 00:13:51.407 ] 00:13:51.407 }' 00:13:51.407 20:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.407 20:26:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.976 20:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:51.976 20:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:51.976 20:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:51.976 20:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:51.976 20:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:51.976 20:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:51.976 20:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.976 20:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.976 20:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.976 20:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.976 20:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:51.976 "name": "raid_bdev1", 00:13:51.976 "uuid": "afc4b320-b4ce-4641-aee7-bb420b5073a2", 00:13:51.976 "strip_size_kb": 0, 00:13:51.976 "state": "online", 00:13:51.976 "raid_level": "raid1", 00:13:51.976 "superblock": true, 00:13:51.976 "num_base_bdevs": 4, 00:13:51.976 "num_base_bdevs_discovered": 3, 00:13:51.976 "num_base_bdevs_operational": 3, 00:13:51.976 "base_bdevs_list": [ 00:13:51.976 { 00:13:51.976 "name": null, 00:13:51.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.976 "is_configured": false, 00:13:51.976 "data_offset": 0, 00:13:51.976 "data_size": 63488 00:13:51.976 }, 00:13:51.976 { 00:13:51.976 "name": "BaseBdev2", 00:13:51.976 "uuid": "938fa21c-a105-5d90-a154-baa1627e1e80", 00:13:51.976 "is_configured": true, 00:13:51.976 "data_offset": 2048, 00:13:51.976 "data_size": 63488 00:13:51.976 }, 00:13:51.976 { 00:13:51.976 "name": "BaseBdev3", 00:13:51.976 "uuid": "5522f3fe-bce1-5320-80b9-42748d1aaa0c", 00:13:51.976 "is_configured": true, 00:13:51.976 "data_offset": 2048, 00:13:51.976 "data_size": 63488 00:13:51.976 }, 00:13:51.976 { 00:13:51.976 "name": "BaseBdev4", 00:13:51.976 "uuid": "961d751e-90fe-559e-92a7-7fe8668ca51a", 00:13:51.976 "is_configured": true, 00:13:51.976 "data_offset": 2048, 00:13:51.976 "data_size": 63488 00:13:51.976 } 00:13:51.976 ] 00:13:51.976 }' 00:13:51.976 20:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:51.976 20:26:45 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:51.976 20:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:51.976 20:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:51.976 20:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:51.976 20:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.976 20:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.976 [2024-11-26 20:26:45.397505] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:51.976 [2024-11-26 20:26:45.401078] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:13:51.976 [2024-11-26 20:26:45.403380] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:51.976 20:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.976 20:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:52.913 20:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:52.913 20:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:52.913 20:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:52.913 20:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:52.913 20:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:52.913 20:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.913 20:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:52.913 20:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.913 20:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.913 20:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.173 20:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:53.173 "name": "raid_bdev1", 00:13:53.173 "uuid": "afc4b320-b4ce-4641-aee7-bb420b5073a2", 00:13:53.173 "strip_size_kb": 0, 00:13:53.173 "state": "online", 00:13:53.173 "raid_level": "raid1", 00:13:53.173 "superblock": true, 00:13:53.173 "num_base_bdevs": 4, 00:13:53.173 "num_base_bdevs_discovered": 4, 00:13:53.173 "num_base_bdevs_operational": 4, 00:13:53.173 "process": { 00:13:53.173 "type": "rebuild", 00:13:53.173 "target": "spare", 00:13:53.173 "progress": { 00:13:53.173 "blocks": 20480, 00:13:53.173 "percent": 32 00:13:53.173 } 00:13:53.173 }, 00:13:53.173 "base_bdevs_list": [ 00:13:53.173 { 00:13:53.173 "name": "spare", 00:13:53.173 "uuid": "6507ddfe-a18b-54c1-a16c-8a29f690f149", 00:13:53.173 "is_configured": true, 00:13:53.173 "data_offset": 2048, 00:13:53.173 "data_size": 63488 00:13:53.173 }, 00:13:53.173 { 00:13:53.173 "name": "BaseBdev2", 00:13:53.173 "uuid": "938fa21c-a105-5d90-a154-baa1627e1e80", 00:13:53.173 "is_configured": true, 00:13:53.173 "data_offset": 2048, 00:13:53.173 "data_size": 63488 00:13:53.173 }, 00:13:53.173 { 00:13:53.173 "name": "BaseBdev3", 00:13:53.173 "uuid": "5522f3fe-bce1-5320-80b9-42748d1aaa0c", 00:13:53.173 "is_configured": true, 00:13:53.173 "data_offset": 2048, 00:13:53.173 "data_size": 63488 00:13:53.173 }, 00:13:53.173 { 00:13:53.173 "name": "BaseBdev4", 00:13:53.173 "uuid": "961d751e-90fe-559e-92a7-7fe8668ca51a", 00:13:53.173 "is_configured": true, 00:13:53.173 "data_offset": 2048, 00:13:53.173 "data_size": 63488 00:13:53.173 } 00:13:53.173 ] 00:13:53.173 }' 00:13:53.173 20:26:46 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:53.173 20:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:53.173 20:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:53.173 20:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:53.173 20:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:53.173 20:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:53.173 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:53.173 20:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:53.173 20:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:53.173 20:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:53.173 20:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:53.173 20:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.173 20:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.173 [2024-11-26 20:26:46.586277] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:53.173 [2024-11-26 20:26:46.710123] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca3430 00:13:53.173 20:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.173 20:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:53.173 20:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:53.173 20:26:46 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:53.173 20:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:53.173 20:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:53.173 20:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:53.173 20:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:53.173 20:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.173 20:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.173 20:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.173 20:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.433 20:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.433 20:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:53.433 "name": "raid_bdev1", 00:13:53.433 "uuid": "afc4b320-b4ce-4641-aee7-bb420b5073a2", 00:13:53.433 "strip_size_kb": 0, 00:13:53.433 "state": "online", 00:13:53.433 "raid_level": "raid1", 00:13:53.433 "superblock": true, 00:13:53.433 "num_base_bdevs": 4, 00:13:53.433 "num_base_bdevs_discovered": 3, 00:13:53.433 "num_base_bdevs_operational": 3, 00:13:53.433 "process": { 00:13:53.433 "type": "rebuild", 00:13:53.433 "target": "spare", 00:13:53.433 "progress": { 00:13:53.433 "blocks": 24576, 00:13:53.433 "percent": 38 00:13:53.433 } 00:13:53.433 }, 00:13:53.433 "base_bdevs_list": [ 00:13:53.433 { 00:13:53.433 "name": "spare", 00:13:53.433 "uuid": "6507ddfe-a18b-54c1-a16c-8a29f690f149", 00:13:53.433 "is_configured": true, 00:13:53.433 "data_offset": 2048, 00:13:53.433 "data_size": 63488 
00:13:53.433 }, 00:13:53.433 { 00:13:53.433 "name": null, 00:13:53.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.433 "is_configured": false, 00:13:53.433 "data_offset": 0, 00:13:53.433 "data_size": 63488 00:13:53.433 }, 00:13:53.433 { 00:13:53.433 "name": "BaseBdev3", 00:13:53.433 "uuid": "5522f3fe-bce1-5320-80b9-42748d1aaa0c", 00:13:53.433 "is_configured": true, 00:13:53.433 "data_offset": 2048, 00:13:53.433 "data_size": 63488 00:13:53.433 }, 00:13:53.433 { 00:13:53.433 "name": "BaseBdev4", 00:13:53.433 "uuid": "961d751e-90fe-559e-92a7-7fe8668ca51a", 00:13:53.433 "is_configured": true, 00:13:53.433 "data_offset": 2048, 00:13:53.433 "data_size": 63488 00:13:53.433 } 00:13:53.433 ] 00:13:53.433 }' 00:13:53.433 20:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:53.433 20:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:53.433 20:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:53.433 20:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:53.433 20:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=395 00:13:53.433 20:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:53.433 20:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:53.433 20:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:53.433 20:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:53.433 20:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:53.433 20:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:53.433 20:26:46 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.433 20:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.433 20:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.433 20:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.433 20:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.433 20:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:53.433 "name": "raid_bdev1", 00:13:53.433 "uuid": "afc4b320-b4ce-4641-aee7-bb420b5073a2", 00:13:53.433 "strip_size_kb": 0, 00:13:53.433 "state": "online", 00:13:53.433 "raid_level": "raid1", 00:13:53.433 "superblock": true, 00:13:53.433 "num_base_bdevs": 4, 00:13:53.433 "num_base_bdevs_discovered": 3, 00:13:53.433 "num_base_bdevs_operational": 3, 00:13:53.433 "process": { 00:13:53.433 "type": "rebuild", 00:13:53.433 "target": "spare", 00:13:53.434 "progress": { 00:13:53.434 "blocks": 26624, 00:13:53.434 "percent": 41 00:13:53.434 } 00:13:53.434 }, 00:13:53.434 "base_bdevs_list": [ 00:13:53.434 { 00:13:53.434 "name": "spare", 00:13:53.434 "uuid": "6507ddfe-a18b-54c1-a16c-8a29f690f149", 00:13:53.434 "is_configured": true, 00:13:53.434 "data_offset": 2048, 00:13:53.434 "data_size": 63488 00:13:53.434 }, 00:13:53.434 { 00:13:53.434 "name": null, 00:13:53.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.434 "is_configured": false, 00:13:53.434 "data_offset": 0, 00:13:53.434 "data_size": 63488 00:13:53.434 }, 00:13:53.434 { 00:13:53.434 "name": "BaseBdev3", 00:13:53.434 "uuid": "5522f3fe-bce1-5320-80b9-42748d1aaa0c", 00:13:53.434 "is_configured": true, 00:13:53.434 "data_offset": 2048, 00:13:53.434 "data_size": 63488 00:13:53.434 }, 00:13:53.434 { 00:13:53.434 "name": "BaseBdev4", 00:13:53.434 "uuid": 
"961d751e-90fe-559e-92a7-7fe8668ca51a", 00:13:53.434 "is_configured": true, 00:13:53.434 "data_offset": 2048, 00:13:53.434 "data_size": 63488 00:13:53.434 } 00:13:53.434 ] 00:13:53.434 }' 00:13:53.434 20:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:53.434 20:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:53.434 20:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:53.692 20:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:53.692 20:26:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:54.628 20:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:54.628 20:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:54.628 20:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:54.628 20:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:54.629 20:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:54.629 20:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:54.629 20:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.629 20:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.629 20:26:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.629 20:26:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.629 20:26:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.629 20:26:48 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:54.629 "name": "raid_bdev1", 00:13:54.629 "uuid": "afc4b320-b4ce-4641-aee7-bb420b5073a2", 00:13:54.629 "strip_size_kb": 0, 00:13:54.629 "state": "online", 00:13:54.629 "raid_level": "raid1", 00:13:54.629 "superblock": true, 00:13:54.629 "num_base_bdevs": 4, 00:13:54.629 "num_base_bdevs_discovered": 3, 00:13:54.629 "num_base_bdevs_operational": 3, 00:13:54.629 "process": { 00:13:54.629 "type": "rebuild", 00:13:54.629 "target": "spare", 00:13:54.629 "progress": { 00:13:54.629 "blocks": 51200, 00:13:54.629 "percent": 80 00:13:54.629 } 00:13:54.629 }, 00:13:54.629 "base_bdevs_list": [ 00:13:54.629 { 00:13:54.629 "name": "spare", 00:13:54.629 "uuid": "6507ddfe-a18b-54c1-a16c-8a29f690f149", 00:13:54.629 "is_configured": true, 00:13:54.629 "data_offset": 2048, 00:13:54.629 "data_size": 63488 00:13:54.629 }, 00:13:54.629 { 00:13:54.629 "name": null, 00:13:54.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.629 "is_configured": false, 00:13:54.629 "data_offset": 0, 00:13:54.629 "data_size": 63488 00:13:54.629 }, 00:13:54.629 { 00:13:54.629 "name": "BaseBdev3", 00:13:54.629 "uuid": "5522f3fe-bce1-5320-80b9-42748d1aaa0c", 00:13:54.629 "is_configured": true, 00:13:54.629 "data_offset": 2048, 00:13:54.629 "data_size": 63488 00:13:54.629 }, 00:13:54.629 { 00:13:54.629 "name": "BaseBdev4", 00:13:54.629 "uuid": "961d751e-90fe-559e-92a7-7fe8668ca51a", 00:13:54.629 "is_configured": true, 00:13:54.629 "data_offset": 2048, 00:13:54.629 "data_size": 63488 00:13:54.629 } 00:13:54.629 ] 00:13:54.629 }' 00:13:54.629 20:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:54.629 20:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:54.629 20:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:54.629 20:26:48 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:54.629 20:26:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:55.196 [2024-11-26 20:26:48.621802] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:55.196 [2024-11-26 20:26:48.621899] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:55.196 [2024-11-26 20:26:48.622045] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:55.764 20:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:55.764 20:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:55.764 20:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:55.764 20:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:55.764 20:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:55.764 20:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:55.764 20:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.764 20:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.764 20:26:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.764 20:26:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.764 20:26:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.764 20:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:55.764 "name": "raid_bdev1", 00:13:55.764 "uuid": "afc4b320-b4ce-4641-aee7-bb420b5073a2", 00:13:55.764 
"strip_size_kb": 0, 00:13:55.764 "state": "online", 00:13:55.764 "raid_level": "raid1", 00:13:55.764 "superblock": true, 00:13:55.764 "num_base_bdevs": 4, 00:13:55.764 "num_base_bdevs_discovered": 3, 00:13:55.764 "num_base_bdevs_operational": 3, 00:13:55.764 "base_bdevs_list": [ 00:13:55.764 { 00:13:55.764 "name": "spare", 00:13:55.764 "uuid": "6507ddfe-a18b-54c1-a16c-8a29f690f149", 00:13:55.764 "is_configured": true, 00:13:55.764 "data_offset": 2048, 00:13:55.764 "data_size": 63488 00:13:55.764 }, 00:13:55.764 { 00:13:55.764 "name": null, 00:13:55.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.764 "is_configured": false, 00:13:55.764 "data_offset": 0, 00:13:55.764 "data_size": 63488 00:13:55.764 }, 00:13:55.764 { 00:13:55.764 "name": "BaseBdev3", 00:13:55.764 "uuid": "5522f3fe-bce1-5320-80b9-42748d1aaa0c", 00:13:55.764 "is_configured": true, 00:13:55.764 "data_offset": 2048, 00:13:55.764 "data_size": 63488 00:13:55.764 }, 00:13:55.764 { 00:13:55.764 "name": "BaseBdev4", 00:13:55.764 "uuid": "961d751e-90fe-559e-92a7-7fe8668ca51a", 00:13:55.764 "is_configured": true, 00:13:55.764 "data_offset": 2048, 00:13:55.764 "data_size": 63488 00:13:55.764 } 00:13:55.764 ] 00:13:55.764 }' 00:13:55.764 20:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:55.764 20:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:55.764 20:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:55.764 20:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:55.764 20:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:55.764 20:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:55.764 20:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:13:55.764 20:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:55.764 20:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:55.764 20:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:55.764 20:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.764 20:26:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.764 20:26:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.764 20:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.764 20:26:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.023 20:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:56.023 "name": "raid_bdev1", 00:13:56.023 "uuid": "afc4b320-b4ce-4641-aee7-bb420b5073a2", 00:13:56.023 "strip_size_kb": 0, 00:13:56.023 "state": "online", 00:13:56.023 "raid_level": "raid1", 00:13:56.023 "superblock": true, 00:13:56.023 "num_base_bdevs": 4, 00:13:56.023 "num_base_bdevs_discovered": 3, 00:13:56.023 "num_base_bdevs_operational": 3, 00:13:56.023 "base_bdevs_list": [ 00:13:56.023 { 00:13:56.023 "name": "spare", 00:13:56.023 "uuid": "6507ddfe-a18b-54c1-a16c-8a29f690f149", 00:13:56.023 "is_configured": true, 00:13:56.023 "data_offset": 2048, 00:13:56.023 "data_size": 63488 00:13:56.023 }, 00:13:56.023 { 00:13:56.023 "name": null, 00:13:56.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.023 "is_configured": false, 00:13:56.023 "data_offset": 0, 00:13:56.023 "data_size": 63488 00:13:56.023 }, 00:13:56.023 { 00:13:56.023 "name": "BaseBdev3", 00:13:56.023 "uuid": "5522f3fe-bce1-5320-80b9-42748d1aaa0c", 00:13:56.023 "is_configured": true, 00:13:56.023 "data_offset": 2048, 00:13:56.023 "data_size": 
63488 00:13:56.023 }, 00:13:56.024 { 00:13:56.024 "name": "BaseBdev4", 00:13:56.024 "uuid": "961d751e-90fe-559e-92a7-7fe8668ca51a", 00:13:56.024 "is_configured": true, 00:13:56.024 "data_offset": 2048, 00:13:56.024 "data_size": 63488 00:13:56.024 } 00:13:56.024 ] 00:13:56.024 }' 00:13:56.024 20:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:56.024 20:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:56.024 20:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:56.024 20:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:56.024 20:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:56.024 20:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:56.024 20:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:56.024 20:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:56.024 20:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:56.024 20:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:56.024 20:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.024 20:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.024 20:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.024 20:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.024 20:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.024 20:26:49 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.024 20:26:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.024 20:26:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.024 20:26:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.024 20:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.024 "name": "raid_bdev1", 00:13:56.024 "uuid": "afc4b320-b4ce-4641-aee7-bb420b5073a2", 00:13:56.024 "strip_size_kb": 0, 00:13:56.024 "state": "online", 00:13:56.024 "raid_level": "raid1", 00:13:56.024 "superblock": true, 00:13:56.024 "num_base_bdevs": 4, 00:13:56.024 "num_base_bdevs_discovered": 3, 00:13:56.024 "num_base_bdevs_operational": 3, 00:13:56.024 "base_bdevs_list": [ 00:13:56.024 { 00:13:56.024 "name": "spare", 00:13:56.024 "uuid": "6507ddfe-a18b-54c1-a16c-8a29f690f149", 00:13:56.024 "is_configured": true, 00:13:56.024 "data_offset": 2048, 00:13:56.024 "data_size": 63488 00:13:56.024 }, 00:13:56.024 { 00:13:56.024 "name": null, 00:13:56.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.024 "is_configured": false, 00:13:56.024 "data_offset": 0, 00:13:56.024 "data_size": 63488 00:13:56.024 }, 00:13:56.024 { 00:13:56.024 "name": "BaseBdev3", 00:13:56.024 "uuid": "5522f3fe-bce1-5320-80b9-42748d1aaa0c", 00:13:56.024 "is_configured": true, 00:13:56.024 "data_offset": 2048, 00:13:56.024 "data_size": 63488 00:13:56.024 }, 00:13:56.024 { 00:13:56.024 "name": "BaseBdev4", 00:13:56.024 "uuid": "961d751e-90fe-559e-92a7-7fe8668ca51a", 00:13:56.024 "is_configured": true, 00:13:56.024 "data_offset": 2048, 00:13:56.024 "data_size": 63488 00:13:56.024 } 00:13:56.024 ] 00:13:56.024 }' 00:13:56.024 20:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.024 20:26:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:56.591 20:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:56.591 20:26:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.591 20:26:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.591 [2024-11-26 20:26:49.929127] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:56.591 [2024-11-26 20:26:49.929177] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:56.591 [2024-11-26 20:26:49.929279] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:56.591 [2024-11-26 20:26:49.929374] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:56.591 [2024-11-26 20:26:49.929394] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:13:56.591 20:26:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.591 20:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.591 20:26:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.591 20:26:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.591 20:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:56.591 20:26:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.591 20:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:56.591 20:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:56.591 20:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:56.591 20:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- 
# nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:56.591 20:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:56.591 20:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:56.592 20:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:56.592 20:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:56.592 20:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:56.592 20:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:56.592 20:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:56.592 20:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:56.592 20:26:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:56.850 /dev/nbd0 00:13:56.850 20:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:56.850 20:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:56.850 20:26:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:56.850 20:26:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:13:56.850 20:26:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:56.850 20:26:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:56.850 20:26:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:56.850 20:26:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:13:56.850 20:26:50 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:56.850 20:26:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:56.850 20:26:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:56.850 1+0 records in 00:13:56.850 1+0 records out 00:13:56.850 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000479536 s, 8.5 MB/s 00:13:56.850 20:26:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:56.850 20:26:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:56.850 20:26:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:56.850 20:26:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:56.850 20:26:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:56.850 20:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:56.850 20:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:56.850 20:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:57.108 /dev/nbd1 00:13:57.108 20:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:57.108 20:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:57.108 20:26:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:57.108 20:26:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:13:57.108 20:26:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- 
# (( i = 1 )) 00:13:57.108 20:26:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:57.108 20:26:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:57.108 20:26:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:13:57.108 20:26:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:57.108 20:26:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:57.108 20:26:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:57.108 1+0 records in 00:13:57.108 1+0 records out 00:13:57.108 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311219 s, 13.2 MB/s 00:13:57.108 20:26:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:57.108 20:26:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:57.108 20:26:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:57.108 20:26:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:57.108 20:26:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:57.108 20:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:57.108 20:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:57.108 20:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:57.108 20:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:57.108 20:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # 
local rpc_server=/var/tmp/spdk.sock 00:13:57.108 20:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:57.108 20:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:57.108 20:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:57.108 20:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:57.108 20:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:57.366 20:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:57.366 20:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:57.366 20:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:57.366 20:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:57.366 20:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:57.366 20:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:57.366 20:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:57.366 20:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:57.366 20:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:57.366 20:26:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:57.623 20:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:57.623 20:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:57.623 20:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local 
nbd_name=nbd1 00:13:57.623 20:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:57.623 20:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:57.624 20:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:57.624 20:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:57.624 20:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:57.624 20:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:57.624 20:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:57.624 20:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.624 20:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.624 20:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.624 20:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:57.624 20:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.624 20:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.881 [2024-11-26 20:26:51.178651] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:57.881 [2024-11-26 20:26:51.178805] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:57.881 [2024-11-26 20:26:51.178855] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:13:57.881 [2024-11-26 20:26:51.178901] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:57.881 [2024-11-26 20:26:51.181462] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:57.881 [2024-11-26 
20:26:51.181561] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:57.881 [2024-11-26 20:26:51.181722] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:57.881 [2024-11-26 20:26:51.181814] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:57.881 [2024-11-26 20:26:51.181985] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:57.881 [2024-11-26 20:26:51.182130] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:57.881 spare 00:13:57.881 20:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.881 20:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:57.881 20:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.881 20:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.881 [2024-11-26 20:26:51.282080] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:13:57.881 [2024-11-26 20:26:51.282245] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:57.881 [2024-11-26 20:26:51.282702] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:13:57.881 [2024-11-26 20:26:51.282961] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:13:57.881 [2024-11-26 20:26:51.283008] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:13:57.881 [2024-11-26 20:26:51.283243] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:57.881 20:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.881 20:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- 
# verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:57.881 20:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:57.882 20:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:57.882 20:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:57.882 20:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:57.882 20:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:57.882 20:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.882 20:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.882 20:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.882 20:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.882 20:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.882 20:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.882 20:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.882 20:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.882 20:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.882 20:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.882 "name": "raid_bdev1", 00:13:57.882 "uuid": "afc4b320-b4ce-4641-aee7-bb420b5073a2", 00:13:57.882 "strip_size_kb": 0, 00:13:57.882 "state": "online", 00:13:57.882 "raid_level": "raid1", 00:13:57.882 "superblock": true, 00:13:57.882 "num_base_bdevs": 4, 00:13:57.882 "num_base_bdevs_discovered": 3, 00:13:57.882 
"num_base_bdevs_operational": 3, 00:13:57.882 "base_bdevs_list": [ 00:13:57.882 { 00:13:57.882 "name": "spare", 00:13:57.882 "uuid": "6507ddfe-a18b-54c1-a16c-8a29f690f149", 00:13:57.882 "is_configured": true, 00:13:57.882 "data_offset": 2048, 00:13:57.882 "data_size": 63488 00:13:57.882 }, 00:13:57.882 { 00:13:57.882 "name": null, 00:13:57.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.882 "is_configured": false, 00:13:57.882 "data_offset": 2048, 00:13:57.882 "data_size": 63488 00:13:57.882 }, 00:13:57.882 { 00:13:57.882 "name": "BaseBdev3", 00:13:57.882 "uuid": "5522f3fe-bce1-5320-80b9-42748d1aaa0c", 00:13:57.882 "is_configured": true, 00:13:57.882 "data_offset": 2048, 00:13:57.882 "data_size": 63488 00:13:57.882 }, 00:13:57.882 { 00:13:57.882 "name": "BaseBdev4", 00:13:57.882 "uuid": "961d751e-90fe-559e-92a7-7fe8668ca51a", 00:13:57.882 "is_configured": true, 00:13:57.882 "data_offset": 2048, 00:13:57.882 "data_size": 63488 00:13:57.882 } 00:13:57.882 ] 00:13:57.882 }' 00:13:57.882 20:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.882 20:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.479 20:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:58.479 20:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:58.479 20:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:58.479 20:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:58.479 20:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:58.479 20:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.479 20:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:13:58.479 20:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.479 20:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.479 20:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.479 20:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:58.479 "name": "raid_bdev1", 00:13:58.479 "uuid": "afc4b320-b4ce-4641-aee7-bb420b5073a2", 00:13:58.479 "strip_size_kb": 0, 00:13:58.479 "state": "online", 00:13:58.479 "raid_level": "raid1", 00:13:58.479 "superblock": true, 00:13:58.479 "num_base_bdevs": 4, 00:13:58.479 "num_base_bdevs_discovered": 3, 00:13:58.479 "num_base_bdevs_operational": 3, 00:13:58.479 "base_bdevs_list": [ 00:13:58.479 { 00:13:58.479 "name": "spare", 00:13:58.479 "uuid": "6507ddfe-a18b-54c1-a16c-8a29f690f149", 00:13:58.479 "is_configured": true, 00:13:58.479 "data_offset": 2048, 00:13:58.479 "data_size": 63488 00:13:58.479 }, 00:13:58.479 { 00:13:58.479 "name": null, 00:13:58.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.479 "is_configured": false, 00:13:58.479 "data_offset": 2048, 00:13:58.479 "data_size": 63488 00:13:58.479 }, 00:13:58.479 { 00:13:58.479 "name": "BaseBdev3", 00:13:58.479 "uuid": "5522f3fe-bce1-5320-80b9-42748d1aaa0c", 00:13:58.479 "is_configured": true, 00:13:58.479 "data_offset": 2048, 00:13:58.479 "data_size": 63488 00:13:58.479 }, 00:13:58.479 { 00:13:58.479 "name": "BaseBdev4", 00:13:58.479 "uuid": "961d751e-90fe-559e-92a7-7fe8668ca51a", 00:13:58.479 "is_configured": true, 00:13:58.479 "data_offset": 2048, 00:13:58.479 "data_size": 63488 00:13:58.479 } 00:13:58.479 ] 00:13:58.479 }' 00:13:58.479 20:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:58.479 20:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:58.479 20:26:51 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:58.479 20:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:58.479 20:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:58.479 20:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.479 20:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.479 20:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.479 20:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.479 20:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:58.479 20:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:58.479 20:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.479 20:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.479 [2024-11-26 20:26:51.994056] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:58.479 20:26:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.479 20:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:58.479 20:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:58.479 20:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:58.479 20:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:58.479 20:26:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:58.479 20:26:51 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:58.479 20:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.480 20:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.480 20:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.480 20:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.480 20:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.480 20:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.480 20:26:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.480 20:26:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.764 20:26:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.764 20:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.764 "name": "raid_bdev1", 00:13:58.764 "uuid": "afc4b320-b4ce-4641-aee7-bb420b5073a2", 00:13:58.764 "strip_size_kb": 0, 00:13:58.764 "state": "online", 00:13:58.764 "raid_level": "raid1", 00:13:58.764 "superblock": true, 00:13:58.764 "num_base_bdevs": 4, 00:13:58.764 "num_base_bdevs_discovered": 2, 00:13:58.764 "num_base_bdevs_operational": 2, 00:13:58.764 "base_bdevs_list": [ 00:13:58.764 { 00:13:58.764 "name": null, 00:13:58.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.764 "is_configured": false, 00:13:58.764 "data_offset": 0, 00:13:58.764 "data_size": 63488 00:13:58.764 }, 00:13:58.764 { 00:13:58.764 "name": null, 00:13:58.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.764 "is_configured": false, 00:13:58.764 "data_offset": 2048, 00:13:58.764 "data_size": 63488 00:13:58.764 }, 
00:13:58.764 { 00:13:58.764 "name": "BaseBdev3", 00:13:58.764 "uuid": "5522f3fe-bce1-5320-80b9-42748d1aaa0c", 00:13:58.764 "is_configured": true, 00:13:58.764 "data_offset": 2048, 00:13:58.764 "data_size": 63488 00:13:58.764 }, 00:13:58.764 { 00:13:58.764 "name": "BaseBdev4", 00:13:58.764 "uuid": "961d751e-90fe-559e-92a7-7fe8668ca51a", 00:13:58.764 "is_configured": true, 00:13:58.764 "data_offset": 2048, 00:13:58.764 "data_size": 63488 00:13:58.764 } 00:13:58.764 ] 00:13:58.764 }' 00:13:58.764 20:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.764 20:26:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.022 20:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:59.022 20:26:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.022 20:26:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.022 [2024-11-26 20:26:52.461336] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:59.022 [2024-11-26 20:26:52.461538] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:59.022 [2024-11-26 20:26:52.461560] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:59.022 [2024-11-26 20:26:52.461611] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:59.022 [2024-11-26 20:26:52.465088] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:13:59.022 [2024-11-26 20:26:52.467346] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:59.022 20:26:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.022 20:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:59.957 20:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:59.957 20:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:59.957 20:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:59.957 20:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:59.957 20:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:59.957 20:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.957 20:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.957 20:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.957 20:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.957 20:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.217 20:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:00.217 "name": "raid_bdev1", 00:14:00.217 "uuid": "afc4b320-b4ce-4641-aee7-bb420b5073a2", 00:14:00.217 "strip_size_kb": 0, 00:14:00.217 "state": "online", 00:14:00.217 "raid_level": "raid1", 
00:14:00.217 "superblock": true, 00:14:00.217 "num_base_bdevs": 4, 00:14:00.217 "num_base_bdevs_discovered": 3, 00:14:00.217 "num_base_bdevs_operational": 3, 00:14:00.217 "process": { 00:14:00.217 "type": "rebuild", 00:14:00.217 "target": "spare", 00:14:00.217 "progress": { 00:14:00.217 "blocks": 20480, 00:14:00.217 "percent": 32 00:14:00.217 } 00:14:00.217 }, 00:14:00.217 "base_bdevs_list": [ 00:14:00.217 { 00:14:00.217 "name": "spare", 00:14:00.217 "uuid": "6507ddfe-a18b-54c1-a16c-8a29f690f149", 00:14:00.217 "is_configured": true, 00:14:00.217 "data_offset": 2048, 00:14:00.217 "data_size": 63488 00:14:00.217 }, 00:14:00.217 { 00:14:00.217 "name": null, 00:14:00.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.217 "is_configured": false, 00:14:00.217 "data_offset": 2048, 00:14:00.217 "data_size": 63488 00:14:00.217 }, 00:14:00.217 { 00:14:00.217 "name": "BaseBdev3", 00:14:00.217 "uuid": "5522f3fe-bce1-5320-80b9-42748d1aaa0c", 00:14:00.217 "is_configured": true, 00:14:00.217 "data_offset": 2048, 00:14:00.217 "data_size": 63488 00:14:00.217 }, 00:14:00.217 { 00:14:00.217 "name": "BaseBdev4", 00:14:00.217 "uuid": "961d751e-90fe-559e-92a7-7fe8668ca51a", 00:14:00.217 "is_configured": true, 00:14:00.217 "data_offset": 2048, 00:14:00.217 "data_size": 63488 00:14:00.217 } 00:14:00.217 ] 00:14:00.217 }' 00:14:00.217 20:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:00.217 20:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:00.217 20:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:00.217 20:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:00.217 20:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:00.217 20:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:00.217 20:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.217 [2024-11-26 20:26:53.630453] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:00.217 [2024-11-26 20:26:53.674313] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:00.217 [2024-11-26 20:26:53.674504] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:00.217 [2024-11-26 20:26:53.674566] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:00.217 [2024-11-26 20:26:53.674594] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:00.217 20:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.217 20:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:00.217 20:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:00.217 20:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:00.217 20:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:00.217 20:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:00.217 20:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:00.217 20:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.217 20:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.217 20:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.217 20:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.217 20:26:53 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.217 20:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.217 20:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.217 20:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.217 20:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.217 20:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.217 "name": "raid_bdev1", 00:14:00.217 "uuid": "afc4b320-b4ce-4641-aee7-bb420b5073a2", 00:14:00.217 "strip_size_kb": 0, 00:14:00.217 "state": "online", 00:14:00.217 "raid_level": "raid1", 00:14:00.217 "superblock": true, 00:14:00.217 "num_base_bdevs": 4, 00:14:00.217 "num_base_bdevs_discovered": 2, 00:14:00.217 "num_base_bdevs_operational": 2, 00:14:00.217 "base_bdevs_list": [ 00:14:00.217 { 00:14:00.217 "name": null, 00:14:00.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.217 "is_configured": false, 00:14:00.217 "data_offset": 0, 00:14:00.217 "data_size": 63488 00:14:00.217 }, 00:14:00.217 { 00:14:00.217 "name": null, 00:14:00.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.217 "is_configured": false, 00:14:00.217 "data_offset": 2048, 00:14:00.217 "data_size": 63488 00:14:00.217 }, 00:14:00.217 { 00:14:00.217 "name": "BaseBdev3", 00:14:00.217 "uuid": "5522f3fe-bce1-5320-80b9-42748d1aaa0c", 00:14:00.217 "is_configured": true, 00:14:00.217 "data_offset": 2048, 00:14:00.217 "data_size": 63488 00:14:00.217 }, 00:14:00.217 { 00:14:00.217 "name": "BaseBdev4", 00:14:00.217 "uuid": "961d751e-90fe-559e-92a7-7fe8668ca51a", 00:14:00.217 "is_configured": true, 00:14:00.217 "data_offset": 2048, 00:14:00.217 "data_size": 63488 00:14:00.217 } 00:14:00.217 ] 00:14:00.217 }' 00:14:00.217 20:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:14:00.217 20:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.784 20:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:00.784 20:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.784 20:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.784 [2024-11-26 20:26:54.102932] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:00.784 [2024-11-26 20:26:54.103059] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:00.784 [2024-11-26 20:26:54.103094] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:14:00.784 [2024-11-26 20:26:54.103107] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.784 [2024-11-26 20:26:54.103689] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.784 [2024-11-26 20:26:54.103715] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:00.784 [2024-11-26 20:26:54.103816] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:00.784 [2024-11-26 20:26:54.103839] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:00.784 [2024-11-26 20:26:54.103851] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:00.784 [2024-11-26 20:26:54.103880] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:00.784 spare 00:14:00.784 [2024-11-26 20:26:54.107347] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:14:00.784 20:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.784 20:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:00.784 [2024-11-26 20:26:54.109578] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:01.720 20:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:01.720 20:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:01.720 20:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:01.720 20:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:01.720 20:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:01.720 20:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.720 20:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.720 20:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.720 20:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.720 20:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.720 20:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:01.720 "name": "raid_bdev1", 00:14:01.720 "uuid": "afc4b320-b4ce-4641-aee7-bb420b5073a2", 00:14:01.720 "strip_size_kb": 0, 00:14:01.720 "state": "online", 00:14:01.720 
"raid_level": "raid1", 00:14:01.720 "superblock": true, 00:14:01.720 "num_base_bdevs": 4, 00:14:01.720 "num_base_bdevs_discovered": 3, 00:14:01.720 "num_base_bdevs_operational": 3, 00:14:01.720 "process": { 00:14:01.720 "type": "rebuild", 00:14:01.720 "target": "spare", 00:14:01.720 "progress": { 00:14:01.720 "blocks": 20480, 00:14:01.720 "percent": 32 00:14:01.720 } 00:14:01.720 }, 00:14:01.720 "base_bdevs_list": [ 00:14:01.720 { 00:14:01.720 "name": "spare", 00:14:01.720 "uuid": "6507ddfe-a18b-54c1-a16c-8a29f690f149", 00:14:01.720 "is_configured": true, 00:14:01.720 "data_offset": 2048, 00:14:01.720 "data_size": 63488 00:14:01.720 }, 00:14:01.720 { 00:14:01.720 "name": null, 00:14:01.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.720 "is_configured": false, 00:14:01.720 "data_offset": 2048, 00:14:01.720 "data_size": 63488 00:14:01.720 }, 00:14:01.720 { 00:14:01.720 "name": "BaseBdev3", 00:14:01.720 "uuid": "5522f3fe-bce1-5320-80b9-42748d1aaa0c", 00:14:01.720 "is_configured": true, 00:14:01.720 "data_offset": 2048, 00:14:01.720 "data_size": 63488 00:14:01.720 }, 00:14:01.720 { 00:14:01.720 "name": "BaseBdev4", 00:14:01.720 "uuid": "961d751e-90fe-559e-92a7-7fe8668ca51a", 00:14:01.720 "is_configured": true, 00:14:01.720 "data_offset": 2048, 00:14:01.720 "data_size": 63488 00:14:01.720 } 00:14:01.720 ] 00:14:01.720 }' 00:14:01.720 20:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:01.720 20:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:01.720 20:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:01.990 20:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:01.990 20:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:01.990 20:26:55 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.990 20:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.990 [2024-11-26 20:26:55.278738] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:01.990 [2024-11-26 20:26:55.316351] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:01.990 [2024-11-26 20:26:55.316538] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:01.990 [2024-11-26 20:26:55.316584] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:01.990 [2024-11-26 20:26:55.316610] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:01.990 20:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.990 20:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:01.990 20:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:01.990 20:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:01.990 20:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:01.990 20:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:01.990 20:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:01.990 20:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.990 20:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.990 20:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.990 20:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.990 
20:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.990 20:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.990 20:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.990 20:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.990 20:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.990 20:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.990 "name": "raid_bdev1", 00:14:01.990 "uuid": "afc4b320-b4ce-4641-aee7-bb420b5073a2", 00:14:01.990 "strip_size_kb": 0, 00:14:01.990 "state": "online", 00:14:01.990 "raid_level": "raid1", 00:14:01.990 "superblock": true, 00:14:01.990 "num_base_bdevs": 4, 00:14:01.990 "num_base_bdevs_discovered": 2, 00:14:01.990 "num_base_bdevs_operational": 2, 00:14:01.990 "base_bdevs_list": [ 00:14:01.990 { 00:14:01.990 "name": null, 00:14:01.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.990 "is_configured": false, 00:14:01.990 "data_offset": 0, 00:14:01.990 "data_size": 63488 00:14:01.990 }, 00:14:01.990 { 00:14:01.990 "name": null, 00:14:01.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.990 "is_configured": false, 00:14:01.990 "data_offset": 2048, 00:14:01.990 "data_size": 63488 00:14:01.990 }, 00:14:01.990 { 00:14:01.990 "name": "BaseBdev3", 00:14:01.990 "uuid": "5522f3fe-bce1-5320-80b9-42748d1aaa0c", 00:14:01.990 "is_configured": true, 00:14:01.990 "data_offset": 2048, 00:14:01.990 "data_size": 63488 00:14:01.990 }, 00:14:01.990 { 00:14:01.990 "name": "BaseBdev4", 00:14:01.990 "uuid": "961d751e-90fe-559e-92a7-7fe8668ca51a", 00:14:01.990 "is_configured": true, 00:14:01.990 "data_offset": 2048, 00:14:01.990 "data_size": 63488 00:14:01.990 } 00:14:01.990 ] 00:14:01.990 }' 00:14:01.990 20:26:55 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.990 20:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.284 20:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:02.284 20:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.284 20:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:02.284 20:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:02.284 20:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.284 20:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.284 20:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.284 20:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.284 20:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.543 20:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.543 20:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.543 "name": "raid_bdev1", 00:14:02.543 "uuid": "afc4b320-b4ce-4641-aee7-bb420b5073a2", 00:14:02.543 "strip_size_kb": 0, 00:14:02.543 "state": "online", 00:14:02.543 "raid_level": "raid1", 00:14:02.543 "superblock": true, 00:14:02.543 "num_base_bdevs": 4, 00:14:02.543 "num_base_bdevs_discovered": 2, 00:14:02.543 "num_base_bdevs_operational": 2, 00:14:02.543 "base_bdevs_list": [ 00:14:02.543 { 00:14:02.543 "name": null, 00:14:02.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.543 "is_configured": false, 00:14:02.543 "data_offset": 0, 00:14:02.543 "data_size": 63488 00:14:02.543 }, 00:14:02.543 
{ 00:14:02.543 "name": null, 00:14:02.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.543 "is_configured": false, 00:14:02.543 "data_offset": 2048, 00:14:02.543 "data_size": 63488 00:14:02.543 }, 00:14:02.543 { 00:14:02.543 "name": "BaseBdev3", 00:14:02.543 "uuid": "5522f3fe-bce1-5320-80b9-42748d1aaa0c", 00:14:02.543 "is_configured": true, 00:14:02.543 "data_offset": 2048, 00:14:02.543 "data_size": 63488 00:14:02.543 }, 00:14:02.543 { 00:14:02.543 "name": "BaseBdev4", 00:14:02.543 "uuid": "961d751e-90fe-559e-92a7-7fe8668ca51a", 00:14:02.543 "is_configured": true, 00:14:02.543 "data_offset": 2048, 00:14:02.543 "data_size": 63488 00:14:02.543 } 00:14:02.543 ] 00:14:02.543 }' 00:14:02.543 20:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.543 20:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:02.543 20:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.543 20:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:02.543 20:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:02.543 20:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.543 20:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.543 20:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.543 20:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:02.543 20:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.544 20:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.544 [2024-11-26 20:26:55.965028] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:02.544 [2024-11-26 20:26:55.965159] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.544 [2024-11-26 20:26:55.965190] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:14:02.544 [2024-11-26 20:26:55.965200] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.544 [2024-11-26 20:26:55.965705] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.544 [2024-11-26 20:26:55.965727] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:02.544 [2024-11-26 20:26:55.965812] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:02.544 [2024-11-26 20:26:55.965828] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:02.544 [2024-11-26 20:26:55.965838] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:02.544 [2024-11-26 20:26:55.965849] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:02.544 BaseBdev1 00:14:02.544 20:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.544 20:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:03.480 20:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:03.480 20:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:03.480 20:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:03.480 20:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:03.480 20:26:56 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:03.480 20:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:03.480 20:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.480 20:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.480 20:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.480 20:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.480 20:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.480 20:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.480 20:26:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.480 20:26:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.480 20:26:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.480 20:26:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.480 "name": "raid_bdev1", 00:14:03.480 "uuid": "afc4b320-b4ce-4641-aee7-bb420b5073a2", 00:14:03.480 "strip_size_kb": 0, 00:14:03.480 "state": "online", 00:14:03.481 "raid_level": "raid1", 00:14:03.481 "superblock": true, 00:14:03.481 "num_base_bdevs": 4, 00:14:03.481 "num_base_bdevs_discovered": 2, 00:14:03.481 "num_base_bdevs_operational": 2, 00:14:03.481 "base_bdevs_list": [ 00:14:03.481 { 00:14:03.481 "name": null, 00:14:03.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.481 "is_configured": false, 00:14:03.481 "data_offset": 0, 00:14:03.481 "data_size": 63488 00:14:03.481 }, 00:14:03.481 { 00:14:03.481 "name": null, 00:14:03.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.481 
"is_configured": false, 00:14:03.481 "data_offset": 2048, 00:14:03.481 "data_size": 63488 00:14:03.481 }, 00:14:03.481 { 00:14:03.481 "name": "BaseBdev3", 00:14:03.481 "uuid": "5522f3fe-bce1-5320-80b9-42748d1aaa0c", 00:14:03.481 "is_configured": true, 00:14:03.481 "data_offset": 2048, 00:14:03.481 "data_size": 63488 00:14:03.481 }, 00:14:03.481 { 00:14:03.481 "name": "BaseBdev4", 00:14:03.481 "uuid": "961d751e-90fe-559e-92a7-7fe8668ca51a", 00:14:03.481 "is_configured": true, 00:14:03.481 "data_offset": 2048, 00:14:03.481 "data_size": 63488 00:14:03.481 } 00:14:03.481 ] 00:14:03.481 }' 00:14:03.481 20:26:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.481 20:26:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.049 20:26:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:04.049 20:26:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:04.049 20:26:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:04.049 20:26:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:04.049 20:26:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:04.049 20:26:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.049 20:26:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.049 20:26:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.049 20:26:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.049 20:26:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.049 20:26:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:14:04.049 "name": "raid_bdev1", 00:14:04.049 "uuid": "afc4b320-b4ce-4641-aee7-bb420b5073a2", 00:14:04.049 "strip_size_kb": 0, 00:14:04.049 "state": "online", 00:14:04.049 "raid_level": "raid1", 00:14:04.049 "superblock": true, 00:14:04.049 "num_base_bdevs": 4, 00:14:04.049 "num_base_bdevs_discovered": 2, 00:14:04.049 "num_base_bdevs_operational": 2, 00:14:04.049 "base_bdevs_list": [ 00:14:04.049 { 00:14:04.049 "name": null, 00:14:04.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.049 "is_configured": false, 00:14:04.049 "data_offset": 0, 00:14:04.049 "data_size": 63488 00:14:04.049 }, 00:14:04.049 { 00:14:04.049 "name": null, 00:14:04.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.049 "is_configured": false, 00:14:04.049 "data_offset": 2048, 00:14:04.049 "data_size": 63488 00:14:04.049 }, 00:14:04.049 { 00:14:04.049 "name": "BaseBdev3", 00:14:04.049 "uuid": "5522f3fe-bce1-5320-80b9-42748d1aaa0c", 00:14:04.049 "is_configured": true, 00:14:04.049 "data_offset": 2048, 00:14:04.049 "data_size": 63488 00:14:04.049 }, 00:14:04.049 { 00:14:04.049 "name": "BaseBdev4", 00:14:04.049 "uuid": "961d751e-90fe-559e-92a7-7fe8668ca51a", 00:14:04.049 "is_configured": true, 00:14:04.049 "data_offset": 2048, 00:14:04.049 "data_size": 63488 00:14:04.049 } 00:14:04.049 ] 00:14:04.049 }' 00:14:04.049 20:26:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:04.049 20:26:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:04.049 20:26:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:04.309 20:26:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:04.309 20:26:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:04.309 20:26:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local 
es=0 00:14:04.309 20:26:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:04.309 20:26:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:04.309 20:26:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:04.309 20:26:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:04.309 20:26:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:04.309 20:26:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:04.309 20:26:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.309 20:26:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.309 [2024-11-26 20:26:57.614438] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:04.309 [2024-11-26 20:26:57.614672] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:04.309 [2024-11-26 20:26:57.614699] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:04.309 request: 00:14:04.309 { 00:14:04.309 "base_bdev": "BaseBdev1", 00:14:04.309 "raid_bdev": "raid_bdev1", 00:14:04.309 "method": "bdev_raid_add_base_bdev", 00:14:04.309 "req_id": 1 00:14:04.309 } 00:14:04.309 Got JSON-RPC error response 00:14:04.309 response: 00:14:04.309 { 00:14:04.309 "code": -22, 00:14:04.309 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:04.309 } 00:14:04.309 20:26:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:04.309 20:26:57 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@653 -- # es=1 00:14:04.309 20:26:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:04.309 20:26:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:04.309 20:26:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:04.309 20:26:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:05.247 20:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:05.247 20:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:05.247 20:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:05.247 20:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:05.247 20:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:05.247 20:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:05.248 20:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.248 20:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.248 20:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.248 20:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.248 20:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.248 20:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.248 20:26:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.248 20:26:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:05.248 20:26:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.248 20:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.248 "name": "raid_bdev1", 00:14:05.248 "uuid": "afc4b320-b4ce-4641-aee7-bb420b5073a2", 00:14:05.248 "strip_size_kb": 0, 00:14:05.248 "state": "online", 00:14:05.248 "raid_level": "raid1", 00:14:05.248 "superblock": true, 00:14:05.248 "num_base_bdevs": 4, 00:14:05.248 "num_base_bdevs_discovered": 2, 00:14:05.248 "num_base_bdevs_operational": 2, 00:14:05.248 "base_bdevs_list": [ 00:14:05.248 { 00:14:05.248 "name": null, 00:14:05.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.248 "is_configured": false, 00:14:05.248 "data_offset": 0, 00:14:05.248 "data_size": 63488 00:14:05.248 }, 00:14:05.248 { 00:14:05.248 "name": null, 00:14:05.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.248 "is_configured": false, 00:14:05.248 "data_offset": 2048, 00:14:05.248 "data_size": 63488 00:14:05.248 }, 00:14:05.248 { 00:14:05.248 "name": "BaseBdev3", 00:14:05.248 "uuid": "5522f3fe-bce1-5320-80b9-42748d1aaa0c", 00:14:05.248 "is_configured": true, 00:14:05.248 "data_offset": 2048, 00:14:05.248 "data_size": 63488 00:14:05.248 }, 00:14:05.248 { 00:14:05.248 "name": "BaseBdev4", 00:14:05.248 "uuid": "961d751e-90fe-559e-92a7-7fe8668ca51a", 00:14:05.248 "is_configured": true, 00:14:05.248 "data_offset": 2048, 00:14:05.248 "data_size": 63488 00:14:05.248 } 00:14:05.248 ] 00:14:05.248 }' 00:14:05.248 20:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.248 20:26:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.816 20:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:05.816 20:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:05.816 20:26:59 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:05.816 20:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:05.816 20:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:05.816 20:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.816 20:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.816 20:26:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.816 20:26:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.816 20:26:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.816 20:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:05.816 "name": "raid_bdev1", 00:14:05.816 "uuid": "afc4b320-b4ce-4641-aee7-bb420b5073a2", 00:14:05.816 "strip_size_kb": 0, 00:14:05.816 "state": "online", 00:14:05.816 "raid_level": "raid1", 00:14:05.816 "superblock": true, 00:14:05.816 "num_base_bdevs": 4, 00:14:05.816 "num_base_bdevs_discovered": 2, 00:14:05.816 "num_base_bdevs_operational": 2, 00:14:05.816 "base_bdevs_list": [ 00:14:05.816 { 00:14:05.816 "name": null, 00:14:05.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.816 "is_configured": false, 00:14:05.816 "data_offset": 0, 00:14:05.816 "data_size": 63488 00:14:05.816 }, 00:14:05.816 { 00:14:05.816 "name": null, 00:14:05.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.816 "is_configured": false, 00:14:05.816 "data_offset": 2048, 00:14:05.816 "data_size": 63488 00:14:05.816 }, 00:14:05.816 { 00:14:05.816 "name": "BaseBdev3", 00:14:05.816 "uuid": "5522f3fe-bce1-5320-80b9-42748d1aaa0c", 00:14:05.816 "is_configured": true, 00:14:05.816 "data_offset": 2048, 00:14:05.816 "data_size": 63488 00:14:05.816 }, 
00:14:05.816 { 00:14:05.816 "name": "BaseBdev4", 00:14:05.816 "uuid": "961d751e-90fe-559e-92a7-7fe8668ca51a", 00:14:05.816 "is_configured": true, 00:14:05.816 "data_offset": 2048, 00:14:05.816 "data_size": 63488 00:14:05.816 } 00:14:05.816 ] 00:14:05.816 }' 00:14:05.816 20:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:05.816 20:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:05.816 20:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:05.816 20:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:05.816 20:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 89146 00:14:05.816 20:26:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 89146 ']' 00:14:05.816 20:26:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 89146 00:14:05.816 20:26:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:14:05.816 20:26:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:05.816 20:26:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89146 00:14:05.816 20:26:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:05.816 20:26:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:05.816 20:26:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89146' 00:14:05.816 killing process with pid 89146 00:14:05.816 20:26:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 89146 00:14:05.816 Received shutdown signal, test time was about 60.000000 seconds 00:14:05.816 00:14:05.816 Latency(us) 00:14:05.816 
[2024-11-26T20:26:59.368Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:05.816 [2024-11-26T20:26:59.368Z] =================================================================================================================== 00:14:05.816 [2024-11-26T20:26:59.368Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:05.816 [2024-11-26 20:26:59.281720] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:05.816 [2024-11-26 20:26:59.281859] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:05.816 20:26:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 89146 00:14:05.816 [2024-11-26 20:26:59.281932] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:05.816 [2024-11-26 20:26:59.281946] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:14:05.816 [2024-11-26 20:26:59.362475] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:06.417 20:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:06.417 00:14:06.417 real 0m24.854s 00:14:06.417 user 0m30.406s 00:14:06.417 sys 0m4.069s 00:14:06.417 20:26:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:06.417 20:26:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.417 ************************************ 00:14:06.417 END TEST raid_rebuild_test_sb 00:14:06.417 ************************************ 00:14:06.417 20:26:59 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:14:06.417 20:26:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:06.417 20:26:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:06.417 20:26:59 bdev_raid -- common/autotest_common.sh@10 -- # 
set +x 00:14:06.417 ************************************ 00:14:06.417 START TEST raid_rebuild_test_io 00:14:06.417 ************************************ 00:14:06.417 20:26:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false true true 00:14:06.417 20:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:06.417 20:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:06.417 20:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:06.417 20:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:06.417 20:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:06.417 20:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:06.417 20:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:06.417 20:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:06.417 20:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:06.417 20:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:06.417 20:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:06.417 20:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:06.417 20:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:06.417 20:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:06.417 20:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:06.417 20:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:06.417 20:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # 
echo BaseBdev4 00:14:06.417 20:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:06.417 20:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:06.417 20:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:06.417 20:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:06.417 20:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:06.417 20:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:06.417 20:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:06.417 20:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:06.417 20:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:06.417 20:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:06.417 20:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:06.417 20:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:06.418 20:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=89898 00:14:06.418 20:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:06.418 20:26:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 89898 00:14:06.418 20:26:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 89898 ']' 00:14:06.418 20:26:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:06.418 20:26:59 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:14:06.418 20:26:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:06.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:06.418 20:26:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:06.418 20:26:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.418 [2024-11-26 20:26:59.901073] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:14:06.418 [2024-11-26 20:26:59.901300] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89898 ] 00:14:06.418 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:06.418 Zero copy mechanism will not be used. 
00:14:06.676 [2024-11-26 20:27:00.069720] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.676 [2024-11-26 20:27:00.156457] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.935 [2024-11-26 20:27:00.232781] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:06.935 [2024-11-26 20:27:00.232917] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:07.504 20:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:07.504 20:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:14:07.504 20:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:07.504 20:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:07.504 20:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.504 20:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.504 BaseBdev1_malloc 00:14:07.504 20:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.504 20:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:07.504 20:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.504 20:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.504 [2024-11-26 20:27:00.800946] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:07.504 [2024-11-26 20:27:00.801115] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:07.504 [2024-11-26 20:27:00.801214] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:07.504 [2024-11-26 
20:27:00.801288] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:07.504 [2024-11-26 20:27:00.803976] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:07.504 [2024-11-26 20:27:00.804075] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:07.504 BaseBdev1 00:14:07.504 20:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.504 20:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:07.504 20:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:07.504 20:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.504 20:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.504 BaseBdev2_malloc 00:14:07.504 20:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.504 20:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:07.504 20:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.504 20:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.504 [2024-11-26 20:27:00.846388] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:07.504 [2024-11-26 20:27:00.846523] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:07.504 [2024-11-26 20:27:00.846592] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:07.504 [2024-11-26 20:27:00.846658] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:07.504 [2024-11-26 20:27:00.849281] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:14:07.504 [2024-11-26 20:27:00.849381] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:07.504 BaseBdev2 00:14:07.504 20:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.505 20:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:07.505 20:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:07.505 20:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.505 20:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.505 BaseBdev3_malloc 00:14:07.505 20:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.505 20:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:07.505 20:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.505 20:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.505 [2024-11-26 20:27:00.881990] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:07.505 [2024-11-26 20:27:00.882052] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:07.505 [2024-11-26 20:27:00.882081] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:07.505 [2024-11-26 20:27:00.882090] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:07.505 [2024-11-26 20:27:00.884587] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:07.505 [2024-11-26 20:27:00.884711] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:07.505 BaseBdev3 00:14:07.505 20:27:00 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.505 20:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:07.505 20:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:07.505 20:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.505 20:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.505 BaseBdev4_malloc 00:14:07.505 20:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.505 20:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:07.505 20:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.505 20:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.505 [2024-11-26 20:27:00.913303] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:07.505 [2024-11-26 20:27:00.913381] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:07.505 [2024-11-26 20:27:00.913410] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:07.505 [2024-11-26 20:27:00.913420] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:07.505 [2024-11-26 20:27:00.915836] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:07.505 [2024-11-26 20:27:00.915878] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:07.505 BaseBdev4 00:14:07.505 20:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.505 20:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 
512 -b spare_malloc 00:14:07.505 20:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.505 20:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.505 spare_malloc 00:14:07.505 20:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.505 20:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:07.505 20:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.505 20:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.505 spare_delay 00:14:07.505 20:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.505 20:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:07.505 20:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.505 20:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.505 [2024-11-26 20:27:00.955465] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:07.505 [2024-11-26 20:27:00.955528] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:07.505 [2024-11-26 20:27:00.955569] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:07.505 [2024-11-26 20:27:00.955579] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:07.505 [2024-11-26 20:27:00.957930] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:07.505 [2024-11-26 20:27:00.958035] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:07.505 spare 00:14:07.505 20:27:00 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.505 20:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:07.505 20:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.505 20:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.505 [2024-11-26 20:27:00.967537] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:07.505 [2024-11-26 20:27:00.969469] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:07.505 [2024-11-26 20:27:00.969608] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:07.505 [2024-11-26 20:27:00.969679] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:07.505 [2024-11-26 20:27:00.969771] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:14:07.505 [2024-11-26 20:27:00.969783] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:07.505 [2024-11-26 20:27:00.970057] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:07.505 [2024-11-26 20:27:00.970218] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:14:07.505 [2024-11-26 20:27:00.970233] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:14:07.505 [2024-11-26 20:27:00.970384] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:07.505 20:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.505 20:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:07.505 20:27:00 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:07.505 20:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:07.505 20:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:07.505 20:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:07.505 20:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:07.505 20:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.505 20:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.505 20:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.505 20:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.505 20:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.505 20:27:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.505 20:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.505 20:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.505 20:27:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.505 20:27:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.505 "name": "raid_bdev1", 00:14:07.505 "uuid": "809a0304-d030-479b-843b-772094fc3ea7", 00:14:07.505 "strip_size_kb": 0, 00:14:07.505 "state": "online", 00:14:07.505 "raid_level": "raid1", 00:14:07.505 "superblock": false, 00:14:07.505 "num_base_bdevs": 4, 00:14:07.505 "num_base_bdevs_discovered": 4, 00:14:07.505 "num_base_bdevs_operational": 4, 00:14:07.505 "base_bdevs_list": [ 00:14:07.505 
{ 00:14:07.505 "name": "BaseBdev1", 00:14:07.505 "uuid": "1cc763c6-87fa-57b3-8b44-a6cdbd8c2202", 00:14:07.505 "is_configured": true, 00:14:07.505 "data_offset": 0, 00:14:07.505 "data_size": 65536 00:14:07.505 }, 00:14:07.505 { 00:14:07.505 "name": "BaseBdev2", 00:14:07.505 "uuid": "d8c850c4-e3e0-58a5-a1aa-1d063a54ff88", 00:14:07.505 "is_configured": true, 00:14:07.505 "data_offset": 0, 00:14:07.505 "data_size": 65536 00:14:07.505 }, 00:14:07.505 { 00:14:07.505 "name": "BaseBdev3", 00:14:07.505 "uuid": "4859b512-858d-5ea6-8b37-5d060952352b", 00:14:07.505 "is_configured": true, 00:14:07.505 "data_offset": 0, 00:14:07.505 "data_size": 65536 00:14:07.505 }, 00:14:07.505 { 00:14:07.505 "name": "BaseBdev4", 00:14:07.505 "uuid": "458d7bda-4814-56d8-ab3c-16b93ca0c5ad", 00:14:07.505 "is_configured": true, 00:14:07.505 "data_offset": 0, 00:14:07.505 "data_size": 65536 00:14:07.505 } 00:14:07.505 ] 00:14:07.505 }' 00:14:07.505 20:27:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.505 20:27:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.076 20:27:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:08.076 20:27:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:08.076 20:27:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.076 20:27:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.076 [2024-11-26 20:27:01.439106] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:08.076 20:27:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.076 20:27:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:08.076 20:27:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.076 
20:27:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:08.076 20:27:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.076 20:27:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.076 20:27:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.076 20:27:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:08.076 20:27:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:08.076 20:27:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:08.076 20:27:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.076 20:27:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:08.076 20:27:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.076 [2024-11-26 20:27:01.538578] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:08.076 20:27:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.076 20:27:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:08.076 20:27:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:08.076 20:27:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:08.076 20:27:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:08.076 20:27:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:08.076 20:27:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:14:08.076 20:27:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.076 20:27:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.076 20:27:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.076 20:27:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.076 20:27:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.076 20:27:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.076 20:27:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.076 20:27:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.076 20:27:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.076 20:27:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.076 "name": "raid_bdev1", 00:14:08.076 "uuid": "809a0304-d030-479b-843b-772094fc3ea7", 00:14:08.076 "strip_size_kb": 0, 00:14:08.076 "state": "online", 00:14:08.076 "raid_level": "raid1", 00:14:08.076 "superblock": false, 00:14:08.076 "num_base_bdevs": 4, 00:14:08.076 "num_base_bdevs_discovered": 3, 00:14:08.076 "num_base_bdevs_operational": 3, 00:14:08.076 "base_bdevs_list": [ 00:14:08.076 { 00:14:08.076 "name": null, 00:14:08.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.076 "is_configured": false, 00:14:08.076 "data_offset": 0, 00:14:08.076 "data_size": 65536 00:14:08.076 }, 00:14:08.076 { 00:14:08.076 "name": "BaseBdev2", 00:14:08.076 "uuid": "d8c850c4-e3e0-58a5-a1aa-1d063a54ff88", 00:14:08.076 "is_configured": true, 00:14:08.076 "data_offset": 0, 00:14:08.076 "data_size": 65536 00:14:08.076 }, 00:14:08.076 { 00:14:08.076 "name": "BaseBdev3", 00:14:08.076 "uuid": 
"4859b512-858d-5ea6-8b37-5d060952352b", 00:14:08.076 "is_configured": true, 00:14:08.076 "data_offset": 0, 00:14:08.076 "data_size": 65536 00:14:08.076 }, 00:14:08.076 { 00:14:08.076 "name": "BaseBdev4", 00:14:08.076 "uuid": "458d7bda-4814-56d8-ab3c-16b93ca0c5ad", 00:14:08.076 "is_configured": true, 00:14:08.076 "data_offset": 0, 00:14:08.076 "data_size": 65536 00:14:08.076 } 00:14:08.076 ] 00:14:08.076 }' 00:14:08.076 20:27:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.076 20:27:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.336 [2024-11-26 20:27:01.640482] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:08.336 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:08.336 Zero copy mechanism will not be used. 00:14:08.336 Running I/O for 60 seconds... 00:14:08.596 20:27:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:08.596 20:27:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.596 20:27:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.596 [2024-11-26 20:27:01.987129] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:08.596 20:27:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.596 20:27:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:08.596 [2024-11-26 20:27:02.046561] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:08.596 [2024-11-26 20:27:02.048740] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:08.856 [2024-11-26 20:27:02.183259] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:08.856 
[2024-11-26 20:27:02.185203] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:08.856 [2024-11-26 20:27:02.388035] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:08.856 [2024-11-26 20:27:02.388495] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:09.115 [2024-11-26 20:27:02.636938] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:09.115 [2024-11-26 20:27:02.638709] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:09.375 122.00 IOPS, 366.00 MiB/s [2024-11-26T20:27:02.927Z] [2024-11-26 20:27:02.871703] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:09.635 20:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:09.635 20:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:09.635 20:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:09.635 20:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:09.635 20:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:09.635 20:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.635 20:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.635 20:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.635 20:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 
-- # set +x 00:14:09.635 20:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.635 20:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:09.635 "name": "raid_bdev1", 00:14:09.635 "uuid": "809a0304-d030-479b-843b-772094fc3ea7", 00:14:09.635 "strip_size_kb": 0, 00:14:09.635 "state": "online", 00:14:09.635 "raid_level": "raid1", 00:14:09.635 "superblock": false, 00:14:09.635 "num_base_bdevs": 4, 00:14:09.635 "num_base_bdevs_discovered": 4, 00:14:09.635 "num_base_bdevs_operational": 4, 00:14:09.635 "process": { 00:14:09.635 "type": "rebuild", 00:14:09.635 "target": "spare", 00:14:09.635 "progress": { 00:14:09.635 "blocks": 10240, 00:14:09.635 "percent": 15 00:14:09.635 } 00:14:09.635 }, 00:14:09.635 "base_bdevs_list": [ 00:14:09.635 { 00:14:09.635 "name": "spare", 00:14:09.635 "uuid": "e9934434-86a4-5b8a-bc5e-a2dff413079d", 00:14:09.635 "is_configured": true, 00:14:09.635 "data_offset": 0, 00:14:09.635 "data_size": 65536 00:14:09.635 }, 00:14:09.635 { 00:14:09.635 "name": "BaseBdev2", 00:14:09.635 "uuid": "d8c850c4-e3e0-58a5-a1aa-1d063a54ff88", 00:14:09.635 "is_configured": true, 00:14:09.635 "data_offset": 0, 00:14:09.635 "data_size": 65536 00:14:09.635 }, 00:14:09.635 { 00:14:09.635 "name": "BaseBdev3", 00:14:09.635 "uuid": "4859b512-858d-5ea6-8b37-5d060952352b", 00:14:09.635 "is_configured": true, 00:14:09.635 "data_offset": 0, 00:14:09.635 "data_size": 65536 00:14:09.635 }, 00:14:09.635 { 00:14:09.635 "name": "BaseBdev4", 00:14:09.635 "uuid": "458d7bda-4814-56d8-ab3c-16b93ca0c5ad", 00:14:09.635 "is_configured": true, 00:14:09.635 "data_offset": 0, 00:14:09.635 "data_size": 65536 00:14:09.635 } 00:14:09.635 ] 00:14:09.635 }' 00:14:09.635 20:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:09.635 20:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:09.635 20:27:03 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:09.635 20:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:09.635 20:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:09.635 20:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.635 20:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.903 [2024-11-26 20:27:03.197601] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:09.903 [2024-11-26 20:27:03.224401] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:09.903 [2024-11-26 20:27:03.236384] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:09.903 [2024-11-26 20:27:03.236441] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:09.903 [2024-11-26 20:27:03.236457] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:09.903 [2024-11-26 20:27:03.251253] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:14:09.903 20:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.903 20:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:09.903 20:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:09.903 20:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:09.903 20:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:09.903 20:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:09.903 20:27:03 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:09.903 20:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.903 20:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.903 20:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.903 20:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.903 20:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.903 20:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.903 20:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.903 20:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.903 20:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.903 20:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.903 "name": "raid_bdev1", 00:14:09.903 "uuid": "809a0304-d030-479b-843b-772094fc3ea7", 00:14:09.903 "strip_size_kb": 0, 00:14:09.903 "state": "online", 00:14:09.903 "raid_level": "raid1", 00:14:09.903 "superblock": false, 00:14:09.903 "num_base_bdevs": 4, 00:14:09.903 "num_base_bdevs_discovered": 3, 00:14:09.903 "num_base_bdevs_operational": 3, 00:14:09.903 "base_bdevs_list": [ 00:14:09.903 { 00:14:09.903 "name": null, 00:14:09.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.903 "is_configured": false, 00:14:09.903 "data_offset": 0, 00:14:09.903 "data_size": 65536 00:14:09.903 }, 00:14:09.903 { 00:14:09.903 "name": "BaseBdev2", 00:14:09.903 "uuid": "d8c850c4-e3e0-58a5-a1aa-1d063a54ff88", 00:14:09.903 "is_configured": true, 00:14:09.903 "data_offset": 0, 00:14:09.903 "data_size": 65536 00:14:09.903 }, 
00:14:09.903 { 00:14:09.903 "name": "BaseBdev3", 00:14:09.903 "uuid": "4859b512-858d-5ea6-8b37-5d060952352b", 00:14:09.903 "is_configured": true, 00:14:09.903 "data_offset": 0, 00:14:09.903 "data_size": 65536 00:14:09.903 }, 00:14:09.903 { 00:14:09.903 "name": "BaseBdev4", 00:14:09.903 "uuid": "458d7bda-4814-56d8-ab3c-16b93ca0c5ad", 00:14:09.903 "is_configured": true, 00:14:09.903 "data_offset": 0, 00:14:09.903 "data_size": 65536 00:14:09.903 } 00:14:09.903 ] 00:14:09.903 }' 00:14:09.903 20:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.903 20:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.163 118.00 IOPS, 354.00 MiB/s [2024-11-26T20:27:03.715Z] 20:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:10.163 20:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:10.163 20:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:10.163 20:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:10.163 20:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:10.163 20:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.163 20:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.163 20:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.163 20:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.507 20:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.507 20:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:10.507 "name": "raid_bdev1", 00:14:10.507 "uuid": 
"809a0304-d030-479b-843b-772094fc3ea7", 00:14:10.507 "strip_size_kb": 0, 00:14:10.507 "state": "online", 00:14:10.507 "raid_level": "raid1", 00:14:10.507 "superblock": false, 00:14:10.507 "num_base_bdevs": 4, 00:14:10.507 "num_base_bdevs_discovered": 3, 00:14:10.507 "num_base_bdevs_operational": 3, 00:14:10.507 "base_bdevs_list": [ 00:14:10.507 { 00:14:10.508 "name": null, 00:14:10.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.508 "is_configured": false, 00:14:10.508 "data_offset": 0, 00:14:10.508 "data_size": 65536 00:14:10.508 }, 00:14:10.508 { 00:14:10.508 "name": "BaseBdev2", 00:14:10.508 "uuid": "d8c850c4-e3e0-58a5-a1aa-1d063a54ff88", 00:14:10.508 "is_configured": true, 00:14:10.508 "data_offset": 0, 00:14:10.508 "data_size": 65536 00:14:10.508 }, 00:14:10.508 { 00:14:10.508 "name": "BaseBdev3", 00:14:10.508 "uuid": "4859b512-858d-5ea6-8b37-5d060952352b", 00:14:10.508 "is_configured": true, 00:14:10.508 "data_offset": 0, 00:14:10.508 "data_size": 65536 00:14:10.508 }, 00:14:10.508 { 00:14:10.508 "name": "BaseBdev4", 00:14:10.508 "uuid": "458d7bda-4814-56d8-ab3c-16b93ca0c5ad", 00:14:10.508 "is_configured": true, 00:14:10.508 "data_offset": 0, 00:14:10.508 "data_size": 65536 00:14:10.508 } 00:14:10.508 ] 00:14:10.508 }' 00:14:10.508 20:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:10.508 20:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:10.508 20:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:10.508 20:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:10.508 20:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:10.508 20:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.508 20:27:03 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:10.508 [2024-11-26 20:27:03.838471] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:10.508 20:27:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.508 20:27:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:10.508 [2024-11-26 20:27:03.888357] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:10.508 [2024-11-26 20:27:03.890604] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:10.508 [2024-11-26 20:27:03.993781] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:10.508 [2024-11-26 20:27:03.995971] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:10.768 [2024-11-26 20:27:04.238553] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:10.768 [2024-11-26 20:27:04.239730] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:11.029 [2024-11-26 20:27:04.577931] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:11.289 116.67 IOPS, 350.00 MiB/s [2024-11-26T20:27:04.841Z] [2024-11-26 20:27:04.796442] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:11.547 20:27:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:11.547 20:27:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:11.547 20:27:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:14:11.547 20:27:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:11.547 20:27:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:11.547 20:27:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.547 20:27:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.547 20:27:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.547 20:27:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.547 20:27:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.547 20:27:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:11.547 "name": "raid_bdev1", 00:14:11.547 "uuid": "809a0304-d030-479b-843b-772094fc3ea7", 00:14:11.547 "strip_size_kb": 0, 00:14:11.547 "state": "online", 00:14:11.547 "raid_level": "raid1", 00:14:11.547 "superblock": false, 00:14:11.547 "num_base_bdevs": 4, 00:14:11.547 "num_base_bdevs_discovered": 4, 00:14:11.547 "num_base_bdevs_operational": 4, 00:14:11.547 "process": { 00:14:11.547 "type": "rebuild", 00:14:11.547 "target": "spare", 00:14:11.547 "progress": { 00:14:11.547 "blocks": 10240, 00:14:11.547 "percent": 15 00:14:11.547 } 00:14:11.547 }, 00:14:11.547 "base_bdevs_list": [ 00:14:11.547 { 00:14:11.547 "name": "spare", 00:14:11.547 "uuid": "e9934434-86a4-5b8a-bc5e-a2dff413079d", 00:14:11.547 "is_configured": true, 00:14:11.547 "data_offset": 0, 00:14:11.547 "data_size": 65536 00:14:11.547 }, 00:14:11.547 { 00:14:11.547 "name": "BaseBdev2", 00:14:11.547 "uuid": "d8c850c4-e3e0-58a5-a1aa-1d063a54ff88", 00:14:11.547 "is_configured": true, 00:14:11.547 "data_offset": 0, 00:14:11.547 "data_size": 65536 00:14:11.547 }, 00:14:11.547 { 00:14:11.547 "name": "BaseBdev3", 00:14:11.547 "uuid": 
"4859b512-858d-5ea6-8b37-5d060952352b", 00:14:11.547 "is_configured": true, 00:14:11.547 "data_offset": 0, 00:14:11.547 "data_size": 65536 00:14:11.547 }, 00:14:11.547 { 00:14:11.547 "name": "BaseBdev4", 00:14:11.547 "uuid": "458d7bda-4814-56d8-ab3c-16b93ca0c5ad", 00:14:11.547 "is_configured": true, 00:14:11.547 "data_offset": 0, 00:14:11.547 "data_size": 65536 00:14:11.547 } 00:14:11.547 ] 00:14:11.547 }' 00:14:11.547 20:27:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:11.547 20:27:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:11.547 20:27:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:11.547 20:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:11.547 20:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:11.547 20:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:11.547 20:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:11.547 20:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:11.547 20:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:11.547 20:27:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.547 20:27:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.547 [2024-11-26 20:27:05.028359] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:11.807 [2024-11-26 20:27:05.158041] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:11.807 [2024-11-26 20:27:05.158677] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:11.807 [2024-11-26 20:27:05.268327] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006080 00:14:11.807 [2024-11-26 20:27:05.268367] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:11.807 [2024-11-26 20:27:05.268428] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:11.807 20:27:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.807 20:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:11.807 20:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:11.807 20:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:11.807 20:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:11.807 20:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:11.807 20:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:11.807 20:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:11.807 20:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.807 20:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.807 20:27:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.807 20:27:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.807 20:27:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.807 20:27:05 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:11.807 "name": "raid_bdev1", 00:14:11.807 "uuid": "809a0304-d030-479b-843b-772094fc3ea7", 00:14:11.807 "strip_size_kb": 0, 00:14:11.807 "state": "online", 00:14:11.807 "raid_level": "raid1", 00:14:11.807 "superblock": false, 00:14:11.807 "num_base_bdevs": 4, 00:14:11.807 "num_base_bdevs_discovered": 3, 00:14:11.807 "num_base_bdevs_operational": 3, 00:14:11.807 "process": { 00:14:11.807 "type": "rebuild", 00:14:11.807 "target": "spare", 00:14:11.807 "progress": { 00:14:11.807 "blocks": 14336, 00:14:11.807 "percent": 21 00:14:11.807 } 00:14:11.807 }, 00:14:11.807 "base_bdevs_list": [ 00:14:11.807 { 00:14:11.807 "name": "spare", 00:14:11.807 "uuid": "e9934434-86a4-5b8a-bc5e-a2dff413079d", 00:14:11.807 "is_configured": true, 00:14:11.807 "data_offset": 0, 00:14:11.807 "data_size": 65536 00:14:11.807 }, 00:14:11.807 { 00:14:11.807 "name": null, 00:14:11.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.807 "is_configured": false, 00:14:11.807 "data_offset": 0, 00:14:11.807 "data_size": 65536 00:14:11.807 }, 00:14:11.807 { 00:14:11.807 "name": "BaseBdev3", 00:14:11.807 "uuid": "4859b512-858d-5ea6-8b37-5d060952352b", 00:14:11.807 "is_configured": true, 00:14:11.807 "data_offset": 0, 00:14:11.807 "data_size": 65536 00:14:11.807 }, 00:14:11.807 { 00:14:11.807 "name": "BaseBdev4", 00:14:11.807 "uuid": "458d7bda-4814-56d8-ab3c-16b93ca0c5ad", 00:14:11.807 "is_configured": true, 00:14:11.807 "data_offset": 0, 00:14:11.807 "data_size": 65536 00:14:11.807 } 00:14:11.807 ] 00:14:11.807 }' 00:14:11.807 20:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:11.807 20:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:11.807 20:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:12.066 20:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare 
== \s\p\a\r\e ]] 00:14:12.066 20:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=414 00:14:12.066 20:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:12.066 20:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:12.066 20:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:12.066 20:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:12.066 20:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:12.066 20:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:12.066 [2024-11-26 20:27:05.406189] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:12.066 [2024-11-26 20:27:05.406927] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:12.066 20:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.066 20:27:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.066 20:27:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.066 20:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.066 20:27:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.066 20:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:12.066 "name": "raid_bdev1", 00:14:12.066 "uuid": "809a0304-d030-479b-843b-772094fc3ea7", 00:14:12.066 "strip_size_kb": 0, 00:14:12.066 "state": "online", 00:14:12.066 "raid_level": "raid1", 00:14:12.066 "superblock": false, 
00:14:12.066 "num_base_bdevs": 4, 00:14:12.066 "num_base_bdevs_discovered": 3, 00:14:12.066 "num_base_bdevs_operational": 3, 00:14:12.066 "process": { 00:14:12.066 "type": "rebuild", 00:14:12.066 "target": "spare", 00:14:12.066 "progress": { 00:14:12.066 "blocks": 16384, 00:14:12.066 "percent": 25 00:14:12.066 } 00:14:12.066 }, 00:14:12.066 "base_bdevs_list": [ 00:14:12.067 { 00:14:12.067 "name": "spare", 00:14:12.067 "uuid": "e9934434-86a4-5b8a-bc5e-a2dff413079d", 00:14:12.067 "is_configured": true, 00:14:12.067 "data_offset": 0, 00:14:12.067 "data_size": 65536 00:14:12.067 }, 00:14:12.067 { 00:14:12.067 "name": null, 00:14:12.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.067 "is_configured": false, 00:14:12.067 "data_offset": 0, 00:14:12.067 "data_size": 65536 00:14:12.067 }, 00:14:12.067 { 00:14:12.067 "name": "BaseBdev3", 00:14:12.067 "uuid": "4859b512-858d-5ea6-8b37-5d060952352b", 00:14:12.067 "is_configured": true, 00:14:12.067 "data_offset": 0, 00:14:12.067 "data_size": 65536 00:14:12.067 }, 00:14:12.067 { 00:14:12.067 "name": "BaseBdev4", 00:14:12.067 "uuid": "458d7bda-4814-56d8-ab3c-16b93ca0c5ad", 00:14:12.067 "is_configured": true, 00:14:12.067 "data_offset": 0, 00:14:12.067 "data_size": 65536 00:14:12.067 } 00:14:12.067 ] 00:14:12.067 }' 00:14:12.067 20:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:12.067 20:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:12.067 20:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:12.067 20:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:12.067 20:27:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:12.326 106.00 IOPS, 318.00 MiB/s [2024-11-26T20:27:05.878Z] [2024-11-26 20:27:05.847925] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 22528 offset_begin: 18432 offset_end: 24576
00:14:12.326 [2024-11-26 20:27:05.848700] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576
00:14:12.894 [2024-11-26 20:27:06.171125] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720
00:14:12.894 [2024-11-26 20:27:06.382638] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720
00:14:12.894 [2024-11-26 20:27:06.383412] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720
00:14:13.153 20:27:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:14:13.153 20:27:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:13.153 20:27:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:13.153 20:27:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:13.153 20:27:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:13.153 20:27:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:13.153 20:27:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:13.153 20:27:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:13.153 20:27:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:13.153 20:27:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:13.153 20:27:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:13.153 20:27:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:13.153 "name": "raid_bdev1",
00:14:13.153 "uuid": "809a0304-d030-479b-843b-772094fc3ea7",
00:14:13.153 "strip_size_kb": 0,
00:14:13.153 "state": "online",
00:14:13.153 "raid_level": "raid1",
00:14:13.153 "superblock": false,
00:14:13.153 "num_base_bdevs": 4,
00:14:13.153 "num_base_bdevs_discovered": 3,
00:14:13.153 "num_base_bdevs_operational": 3,
00:14:13.153 "process": {
00:14:13.153 "type": "rebuild",
00:14:13.153 "target": "spare",
00:14:13.153 "progress": {
00:14:13.153 "blocks": 28672,
00:14:13.153 "percent": 43
00:14:13.153 }
00:14:13.153 },
00:14:13.154 "base_bdevs_list": [
00:14:13.154 {
00:14:13.154 "name": "spare",
00:14:13.154 "uuid": "e9934434-86a4-5b8a-bc5e-a2dff413079d",
00:14:13.154 "is_configured": true,
00:14:13.154 "data_offset": 0,
00:14:13.154 "data_size": 65536
00:14:13.154 },
00:14:13.154 {
00:14:13.154 "name": null,
00:14:13.154 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:13.154 "is_configured": false,
00:14:13.154 "data_offset": 0,
00:14:13.154 "data_size": 65536
00:14:13.154 },
00:14:13.154 {
00:14:13.154 "name": "BaseBdev3",
00:14:13.154 "uuid": "4859b512-858d-5ea6-8b37-5d060952352b",
00:14:13.154 "is_configured": true,
00:14:13.154 "data_offset": 0,
00:14:13.154 "data_size": 65536
00:14:13.154 },
00:14:13.154 {
00:14:13.154 "name": "BaseBdev4",
00:14:13.154 "uuid": "458d7bda-4814-56d8-ab3c-16b93ca0c5ad",
00:14:13.154 "is_configured": true,
00:14:13.154 "data_offset": 0,
00:14:13.154 "data_size": 65536
00:14:13.154 }
00:14:13.154 ]
00:14:13.154 }'
00:14:13.154 20:27:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:13.154 20:27:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:13.154 20:27:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:13.154 93.00 IOPS, 279.00 MiB/s [2024-11-26T20:27:06.706Z] 20:27:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:14:13.154 20:27:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:14:13.414 [2024-11-26 20:27:06.728221] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864
00:14:13.414 [2024-11-26 20:27:06.728986] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864
00:14:13.414 [2024-11-26 20:27:06.948961] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864
00:14:13.982 [2024-11-26 20:27:07.280561] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008
00:14:14.242 82.83 IOPS, 248.50 MiB/s [2024-11-26T20:27:07.794Z] 20:27:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:14:14.242 20:27:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:14.242 20:27:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:14.242 20:27:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:14.242 20:27:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:14.242 20:27:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:14.242 20:27:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:14.242 20:27:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:14.242 20:27:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:14.242 20:27:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:14.242 20:27:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:14.242 20:27:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:14.242 "name": "raid_bdev1",
00:14:14.242 "uuid": "809a0304-d030-479b-843b-772094fc3ea7",
00:14:14.242 "strip_size_kb": 0,
00:14:14.242 "state": "online",
00:14:14.242 "raid_level": "raid1",
00:14:14.242 "superblock": false,
00:14:14.242 "num_base_bdevs": 4,
00:14:14.242 "num_base_bdevs_discovered": 3,
00:14:14.242 "num_base_bdevs_operational": 3,
00:14:14.242 "process": {
00:14:14.242 "type": "rebuild",
00:14:14.242 "target": "spare",
00:14:14.242 "progress": {
00:14:14.242 "blocks": 45056,
00:14:14.242 "percent": 68
00:14:14.242 }
00:14:14.242 },
00:14:14.242 "base_bdevs_list": [
00:14:14.242 {
00:14:14.242 "name": "spare",
00:14:14.242 "uuid": "e9934434-86a4-5b8a-bc5e-a2dff413079d",
00:14:14.242 "is_configured": true,
00:14:14.242 "data_offset": 0,
00:14:14.242 "data_size": 65536
00:14:14.242 },
00:14:14.242 {
00:14:14.242 "name": null,
00:14:14.242 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:14.242 "is_configured": false,
00:14:14.242 "data_offset": 0,
00:14:14.242 "data_size": 65536
00:14:14.242 },
00:14:14.242 {
00:14:14.242 "name": "BaseBdev3",
00:14:14.242 "uuid": "4859b512-858d-5ea6-8b37-5d060952352b",
00:14:14.242 "is_configured": true,
00:14:14.242 "data_offset": 0,
00:14:14.242 "data_size": 65536
00:14:14.242 },
00:14:14.242 {
00:14:14.242 "name": "BaseBdev4",
00:14:14.242 "uuid": "458d7bda-4814-56d8-ab3c-16b93ca0c5ad",
00:14:14.242 "is_configured": true,
00:14:14.242 "data_offset": 0,
00:14:14.242 "data_size": 65536
00:14:14.242 }
00:14:14.242 ]
00:14:14.242 }'
00:14:14.242 20:27:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:14.242 [2024-11-26 20:27:07.751820] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152
00:14:14.242 20:27:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:14.502 20:27:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:14.502 20:27:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:14:14.502 20:27:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:14:15.448 77.43 IOPS, 232.29 MiB/s [2024-11-26T20:27:09.000Z] 20:27:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:14:15.448 20:27:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:15.448 20:27:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:15.448 20:27:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:15.448 20:27:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:15.448 20:27:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:15.448 20:27:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:15.448 20:27:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:15.448 20:27:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:15.448 20:27:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:15.448 20:27:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:15.448 [2024-11-26 20:27:08.863962] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:14:15.448 20:27:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:15.448 "name": "raid_bdev1",
00:14:15.448 "uuid": "809a0304-d030-479b-843b-772094fc3ea7",
00:14:15.448 "strip_size_kb": 0,
00:14:15.448 "state": "online",
00:14:15.448 "raid_level": "raid1",
00:14:15.448 "superblock": false,
00:14:15.448 "num_base_bdevs": 4,
00:14:15.448 "num_base_bdevs_discovered": 3,
00:14:15.448 "num_base_bdevs_operational": 3,
00:14:15.448 "process": {
00:14:15.448 "type": "rebuild",
00:14:15.448 "target": "spare",
00:14:15.448 "progress": {
00:14:15.448 "blocks": 63488,
00:14:15.448 "percent": 96
00:14:15.448 }
00:14:15.448 },
00:14:15.448 "base_bdevs_list": [
00:14:15.448 {
00:14:15.448 "name": "spare",
00:14:15.448 "uuid": "e9934434-86a4-5b8a-bc5e-a2dff413079d",
00:14:15.448 "is_configured": true,
00:14:15.448 "data_offset": 0,
00:14:15.448 "data_size": 65536
00:14:15.448 },
00:14:15.448 {
00:14:15.448 "name": null,
00:14:15.448 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:15.448 "is_configured": false,
00:14:15.448 "data_offset": 0,
00:14:15.448 "data_size": 65536
00:14:15.448 },
00:14:15.448 {
00:14:15.448 "name": "BaseBdev3",
00:14:15.448 "uuid": "4859b512-858d-5ea6-8b37-5d060952352b",
00:14:15.448 "is_configured": true,
00:14:15.448 "data_offset": 0,
00:14:15.448 "data_size": 65536
00:14:15.448 },
00:14:15.448 {
00:14:15.448 "name": "BaseBdev4",
00:14:15.448 "uuid": "458d7bda-4814-56d8-ab3c-16b93ca0c5ad",
00:14:15.448 "is_configured": true,
00:14:15.448 "data_offset": 0,
00:14:15.448 "data_size": 65536
00:14:15.448 }
00:14:15.448 ]
00:14:15.448 }'
00:14:15.448 20:27:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:15.448 20:27:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:15.448 20:27:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:15.448 20:27:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:14:15.448 20:27:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:14:15.448 [2024-11-26 20:27:08.963776] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:14:15.448 [2024-11-26 20:27:08.967412] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:16.643 71.00 IOPS, 213.00 MiB/s [2024-11-26T20:27:10.195Z] 20:27:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:14:16.643 20:27:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:16.643 20:27:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:16.643 20:27:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:16.643 20:27:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:16.643 20:27:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:16.643 20:27:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:16.643 20:27:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:16.643 20:27:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:16.643 20:27:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:16.643 20:27:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:16.643 20:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:16.643 "name": "raid_bdev1",
00:14:16.643 "uuid": "809a0304-d030-479b-843b-772094fc3ea7",
00:14:16.643 "strip_size_kb": 0,
00:14:16.643 "state": "online",
00:14:16.643 "raid_level": "raid1",
00:14:16.643 "superblock": false,
00:14:16.643 "num_base_bdevs": 4,
00:14:16.643 "num_base_bdevs_discovered": 3,
00:14:16.643 "num_base_bdevs_operational": 3,
00:14:16.643 "base_bdevs_list": [
00:14:16.643 {
00:14:16.643 "name": "spare",
00:14:16.643 "uuid": "e9934434-86a4-5b8a-bc5e-a2dff413079d",
00:14:16.643 "is_configured": true,
00:14:16.643 "data_offset": 0,
00:14:16.643 "data_size": 65536
00:14:16.643 },
00:14:16.643 {
00:14:16.643 "name": null,
00:14:16.643 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:16.643 "is_configured": false,
00:14:16.643 "data_offset": 0,
00:14:16.643 "data_size": 65536
00:14:16.643 },
00:14:16.643 {
00:14:16.643 "name": "BaseBdev3",
00:14:16.643 "uuid": "4859b512-858d-5ea6-8b37-5d060952352b",
00:14:16.643 "is_configured": true,
00:14:16.643 "data_offset": 0,
00:14:16.643 "data_size": 65536
00:14:16.643 },
00:14:16.643 {
00:14:16.643 "name": "BaseBdev4",
00:14:16.643 "uuid": "458d7bda-4814-56d8-ab3c-16b93ca0c5ad",
00:14:16.643 "is_configured": true,
00:14:16.643 "data_offset": 0,
00:14:16.643 "data_size": 65536
00:14:16.643 }
00:14:16.643 ]
00:14:16.643 }'
00:14:16.643 20:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:16.643 20:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]]
00:14:16.643 20:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:16.643 20:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]]
00:14:16.643 20:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break
00:14:16.643 20:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none
00:14:16.643 20:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:16.643 20:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:14:16.643 20:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:14:16.643 20:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:16.643 20:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:16.643 20:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:16.643 20:27:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:16.643 20:27:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:16.643 20:27:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:16.643 20:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:16.643 "name": "raid_bdev1",
00:14:16.643 "uuid": "809a0304-d030-479b-843b-772094fc3ea7",
00:14:16.643 "strip_size_kb": 0,
00:14:16.644 "state": "online",
00:14:16.644 "raid_level": "raid1",
00:14:16.644 "superblock": false,
00:14:16.644 "num_base_bdevs": 4,
00:14:16.644 "num_base_bdevs_discovered": 3,
00:14:16.644 "num_base_bdevs_operational": 3,
00:14:16.644 "base_bdevs_list": [
00:14:16.644 {
00:14:16.644 "name": "spare",
00:14:16.644 "uuid": "e9934434-86a4-5b8a-bc5e-a2dff413079d",
00:14:16.644 "is_configured": true,
00:14:16.644 "data_offset": 0,
00:14:16.644 "data_size": 65536
00:14:16.644 },
00:14:16.644 {
00:14:16.644 "name": null,
00:14:16.644 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:16.644 "is_configured": false,
00:14:16.644 "data_offset": 0,
00:14:16.644 "data_size": 65536
00:14:16.644 },
00:14:16.644 {
00:14:16.644 "name": "BaseBdev3",
00:14:16.644 "uuid": "4859b512-858d-5ea6-8b37-5d060952352b",
00:14:16.644 "is_configured": true,
00:14:16.644 "data_offset": 0,
00:14:16.644 "data_size": 65536
00:14:16.644 },
00:14:16.644 {
00:14:16.644 "name": "BaseBdev4",
00:14:16.644 "uuid": "458d7bda-4814-56d8-ab3c-16b93ca0c5ad",
00:14:16.644 "is_configured": true,
00:14:16.644 "data_offset": 0,
00:14:16.644 "data_size": 65536
00:14:16.644 }
00:14:16.644 ]
00:14:16.644 }'
00:14:16.903 20:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:16.903 20:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:14:16.903 20:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:16.903 20:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:14:16.903 20:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:14:16.903 20:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:16.903 20:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:16.903 20:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:16.903 20:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:16.903 20:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:16.903 20:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:16.903 20:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:16.903 20:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:16.903 20:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:16.903 20:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:16.903 20:27:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:16.903 20:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:16.903 20:27:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:16.903 20:27:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:16.903 20:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:16.903 "name": "raid_bdev1",
00:14:16.903 "uuid": "809a0304-d030-479b-843b-772094fc3ea7",
00:14:16.903 "strip_size_kb": 0,
00:14:16.903 "state": "online",
00:14:16.903 "raid_level": "raid1",
00:14:16.903 "superblock": false,
00:14:16.903 "num_base_bdevs": 4,
00:14:16.903 "num_base_bdevs_discovered": 3,
00:14:16.903 "num_base_bdevs_operational": 3,
00:14:16.903 "base_bdevs_list": [
00:14:16.903 {
00:14:16.903 "name": "spare",
00:14:16.903 "uuid": "e9934434-86a4-5b8a-bc5e-a2dff413079d",
00:14:16.903 "is_configured": true,
00:14:16.903 "data_offset": 0,
00:14:16.903 "data_size": 65536
00:14:16.903 },
00:14:16.903 {
00:14:16.903 "name": null,
00:14:16.903 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:16.903 "is_configured": false,
00:14:16.903 "data_offset": 0,
00:14:16.903 "data_size": 65536
00:14:16.903 },
00:14:16.903 {
00:14:16.903 "name": "BaseBdev3",
00:14:16.903 "uuid": "4859b512-858d-5ea6-8b37-5d060952352b",
00:14:16.903 "is_configured": true,
00:14:16.903 "data_offset": 0,
00:14:16.903 "data_size": 65536
00:14:16.903 },
00:14:16.903 {
00:14:16.903 "name": "BaseBdev4",
00:14:16.903 "uuid": "458d7bda-4814-56d8-ab3c-16b93ca0c5ad",
00:14:16.903 "is_configured": true,
00:14:16.903 "data_offset": 0,
00:14:16.903 "data_size": 65536
00:14:16.903 }
00:14:16.903 ]
00:14:16.903 }'
00:14:16.903 20:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:16.903 20:27:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:17.162 20:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:14:17.162 20:27:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:17.162 20:27:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:17.162 [2024-11-26 20:27:10.662539] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:14:17.162 [2024-11-26 20:27:10.662572] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:14:17.422 66.44 IOPS, 199.33 MiB/s
00:14:17.422 Latency(us)
00:14:17.422 [2024-11-26T20:27:10.974Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:17.422 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728)
00:14:17.422 raid_bdev1 : 9.09 66.01 198.04 0.00 0.00 21763.99 305.86 113099.68
00:14:17.422 [2024-11-26T20:27:10.975Z] ===================================================================================================================
00:14:17.423 [2024-11-26T20:27:10.975Z] Total : 66.01 198.04 0.00 0.00 21763.99 305.86 113099.68
00:14:17.423 [2024-11-26 20:27:10.718482] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:17.423 [2024-11-26 20:27:10.718531] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:17.423 [2024-11-26 20:27:10.718646] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:14:17.423 [2024-11-26 20:27:10.718657] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline
00:14:17.423 {
00:14:17.423 "results": [
00:14:17.423 {
00:14:17.423 "job": "raid_bdev1",
00:14:17.423 "core_mask": "0x1",
00:14:17.423 "workload": "randrw",
00:14:17.423 "percentage": 50,
00:14:17.423 "status": "finished",
00:14:17.423 "queue_depth": 2,
00:14:17.423 "io_size": 3145728,
00:14:17.423 "runtime": 9.089014,
00:14:17.423 "iops": 66.01376122866573,
00:14:17.423 "mibps": 198.0412836859972,
00:14:17.423 "io_failed": 0,
00:14:17.423 "io_timeout": 0,
00:14:17.423 "avg_latency_us": 21763.991662299853,
00:14:17.423 "min_latency_us": 305.8585152838428,
00:14:17.423 "max_latency_us": 113099.68209606987
00:14:17.423 }
00:14:17.423 ],
00:14:17.423 "core_count": 1
00:14:17.423 }
00:14:17.423 20:27:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:17.423 20:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:17.423 20:27:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:17.423 20:27:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:14:17.423 20:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length
00:14:17.423 20:27:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:17.423 20:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]]
00:14:17.423 20:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']'
00:14:17.423 20:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']'
00:14:17.423 20:27:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0
00:14:17.423 20:27:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:14:17.423 20:27:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare')
00:14:17.423 20:27:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list
00:14:17.423 20:27:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:14:17.423 20:27:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list
00:14:17.423 20:27:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i
00:14:17.423 20:27:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:14:17.423 20:27:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:14:17.423 20:27:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0
00:14:17.682 /dev/nbd0
00:14:17.682 20:27:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:14:17.682 20:27:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:14:17.682 20:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:14:17.682 20:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i
00:14:17.682 20:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:14:17.682 20:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:14:17.682 20:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:14:17.682 20:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break
00:14:17.682 20:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:14:17.682 20:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:14:17.682 20:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:14:17.682 1+0 records in
00:14:17.682 1+0 records out
00:14:17.682 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000431693 s, 9.5 MB/s
00:14:17.682 20:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:14:17.682 20:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096
00:14:17.682 20:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:14:17.683 20:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:14:17.683 20:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0
00:14:17.683 20:27:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:14:17.683 20:27:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:14:17.683 20:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}"
00:14:17.683 20:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']'
00:14:17.683 20:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue
00:14:17.683 20:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}"
00:14:17.683 20:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']'
00:14:17.683 20:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1
00:14:17.683 20:27:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:14:17.683 20:27:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3')
00:14:17.683 20:27:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list
00:14:17.683 20:27:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1')
00:14:17.683 20:27:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list
00:14:17.683 20:27:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i
00:14:17.683 20:27:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:14:17.683 20:27:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:14:17.683 20:27:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1
/dev/nbd1
00:14:17.941 20:27:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:14:17.941 20:27:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:14:17.941 20:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:14:17.941 20:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i
00:14:17.941 20:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:14:17.941 20:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:14:17.941 20:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:14:17.941 20:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break
00:14:17.941 20:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:14:17.941 20:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:14:17.941 20:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:14:17.941 1+0 records in
00:14:17.941 1+0 records out
00:14:17.941 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000299691 s, 13.7 MB/s
00:14:17.941 20:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:14:17.941 20:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096
00:14:17.941 20:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:14:17.941 20:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:14:17.941 20:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0
00:14:17.941 20:27:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:14:17.941 20:27:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:14:17.941 20:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1
00:14:17.941 20:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1
00:14:17.941 20:27:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:14:17.941 20:27:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1')
00:14:17.941 20:27:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list
00:14:17.941 20:27:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i
00:14:17.941 20:27:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:14:17.941 20:27:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:14:18.200 20:27:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:14:18.200 20:27:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:14:18.200 20:27:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:14:18.200 20:27:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:14:18.200 20:27:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:14:18.200 20:27:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:14:18.200 20:27:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break
00:14:18.200 20:27:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0
00:14:18.200 20:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}"
00:14:18.200 20:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']'
00:14:18.200 20:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1
00:14:18.200 20:27:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:14:18.200 20:27:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4')
00:14:18.200 20:27:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list
00:14:18.200 20:27:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1')
00:14:18.200 20:27:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list
00:14:18.200 20:27:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i
00:14:18.200 20:27:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:14:18.200 20:27:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:14:18.200 20:27:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1
/dev/nbd1
00:14:18.459 20:27:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:14:18.459 20:27:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:14:18.459 20:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:14:18.459 20:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i
00:14:18.459 20:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:14:18.459 20:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:14:18.459 20:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:14:18.459 20:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break
00:14:18.459 20:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:14:18.459 20:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:14:18.459 20:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:14:18.459 1+0 records in
00:14:18.459 1+0 records out
00:14:18.459 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000326719 s, 12.5 MB/s
00:14:18.459 20:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:14:18.459 20:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096
00:14:18.459 20:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:14:18.459 20:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:14:18.459 20:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0
00:14:18.459 20:27:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:14:18.459 20:27:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:14:18.459 20:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1
00:14:18.718 20:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1
00:14:18.718 20:27:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:14:18.718 20:27:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1')
00:14:18.718 20:27:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list
00:14:18.718 20:27:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i
00:14:18.718 20:27:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:14:18.718 20:27:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:14:18.718 20:27:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:14:18.718 20:27:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:14:18.718 20:27:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:14:18.718 20:27:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:14:18.718 20:27:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:14:18.718 20:27:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:14:18.718 20:27:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break
00:14:18.718 20:27:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0
00:14:18.718 20:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:14:18.718 20:27:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:14:18.718 20:27:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:14:18.718 20:27:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list
00:14:18.975 20:27:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i
00:14:18.975 20:27:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:14:18.975 20:27:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:14:18.975 20:27:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:14:18.975 20:27:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:14:18.975 20:27:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:14:18.975 20:27:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:14:18.975 20:27:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:14:18.975 20:27:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:14:18.975 20:27:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break
00:14:18.975 20:27:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0
00:14:18.975 20:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']'
00:14:18.975 20:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 89898
00:14:18.975 20:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 89898 ']'
00:14:18.975 20:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 89898
00:14:18.975 20:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname
00:14:19.232 20:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:14:19.232 20:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89898
00:14:19.232 20:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:14:19.232 20:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:14:19.232 killing process with pid 89898
00:14:19.232 20:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968
-- # echo 'killing process with pid 89898' 00:14:19.232 20:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 89898 00:14:19.232 Received shutdown signal, test time was about 10.941632 seconds 00:14:19.232 00:14:19.232 Latency(us) 00:14:19.232 [2024-11-26T20:27:12.784Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:19.232 [2024-11-26T20:27:12.784Z] =================================================================================================================== 00:14:19.232 [2024-11-26T20:27:12.784Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:19.232 [2024-11-26 20:27:12.563429] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:19.232 20:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 89898 00:14:19.232 [2024-11-26 20:27:12.639105] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:19.490 20:27:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:19.490 00:14:19.490 real 0m13.204s 00:14:19.490 user 0m16.881s 00:14:19.490 sys 0m1.963s 00:14:19.490 20:27:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:19.490 20:27:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.490 ************************************ 00:14:19.490 END TEST raid_rebuild_test_io 00:14:19.490 ************************************ 00:14:19.748 20:27:13 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:14:19.748 20:27:13 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:19.748 20:27:13 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:19.748 20:27:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:19.748 ************************************ 00:14:19.748 START TEST raid_rebuild_test_sb_io 00:14:19.748 ************************************ 
00:14:19.748 20:27:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true true true 00:14:19.748 20:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:19.748 20:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:19.748 20:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:19.748 20:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:19.748 20:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:19.748 20:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:19.748 20:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:19.748 20:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:19.748 20:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:19.748 20:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:19.748 20:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:19.748 20:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:19.748 20:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:19.748 20:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:19.748 20:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:19.749 20:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:19.749 20:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:19.749 20:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- 
# (( i++ )) 00:14:19.749 20:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:19.749 20:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:19.749 20:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:19.749 20:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:19.749 20:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:19.749 20:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:19.749 20:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:19.749 20:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:19.749 20:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:19.749 20:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:19.749 20:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:19.749 20:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:19.749 20:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=90314 00:14:19.749 20:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 90314 00:14:19.749 20:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:19.749 20:27:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 90314 ']' 00:14:19.749 20:27:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.749 
20:27:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:19.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:19.749 20:27:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:19.749 20:27:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:19.749 20:27:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.749 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:19.749 Zero copy mechanism will not be used. 00:14:19.749 [2024-11-26 20:27:13.164540] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:14:19.749 [2024-11-26 20:27:13.164697] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90314 ] 00:14:20.007 [2024-11-26 20:27:13.330055] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.007 [2024-11-26 20:27:13.416864] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.007 [2024-11-26 20:27:13.494071] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:20.007 [2024-11-26 20:27:13.494125] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:20.573 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:20.573 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:14:20.573 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:20.573 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:20.573 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.573 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.573 BaseBdev1_malloc 00:14:20.573 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.573 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:20.573 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.573 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.832 [2024-11-26 20:27:14.124555] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:20.832 [2024-11-26 20:27:14.124636] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:20.832 [2024-11-26 20:27:14.124673] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:20.832 [2024-11-26 20:27:14.124692] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:20.832 [2024-11-26 20:27:14.127356] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:20.832 [2024-11-26 20:27:14.127395] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:20.832 BaseBdev1 00:14:20.832 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.832 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:20.832 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:20.832 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 
-- # xtrace_disable 00:14:20.832 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.832 BaseBdev2_malloc 00:14:20.832 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.832 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:20.832 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.832 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.832 [2024-11-26 20:27:14.167289] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:20.832 [2024-11-26 20:27:14.167380] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:20.832 [2024-11-26 20:27:14.167416] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:20.832 [2024-11-26 20:27:14.167430] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:20.832 [2024-11-26 20:27:14.170403] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:20.832 [2024-11-26 20:27:14.170450] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:20.832 BaseBdev2 00:14:20.832 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.832 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:20.832 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:20.832 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.832 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.832 BaseBdev3_malloc 00:14:20.832 
20:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.832 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:20.832 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.832 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.832 [2024-11-26 20:27:14.199764] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:20.832 [2024-11-26 20:27:14.199833] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:20.832 [2024-11-26 20:27:14.199868] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:20.832 [2024-11-26 20:27:14.199879] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:20.832 [2024-11-26 20:27:14.202451] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:20.832 [2024-11-26 20:27:14.202498] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:20.832 BaseBdev3 00:14:20.832 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.832 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:20.832 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:20.832 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.832 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.832 BaseBdev4_malloc 00:14:20.832 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.832 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:20.832 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.832 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.832 [2024-11-26 20:27:14.235889] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:20.832 [2024-11-26 20:27:14.235976] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:20.832 [2024-11-26 20:27:14.236010] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:20.832 [2024-11-26 20:27:14.236021] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:20.832 [2024-11-26 20:27:14.238719] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:20.832 [2024-11-26 20:27:14.238771] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:20.832 BaseBdev4 00:14:20.832 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.832 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:20.832 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.832 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.832 spare_malloc 00:14:20.832 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.832 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:20.832 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.832 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:20.832 spare_delay 00:14:20.832 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.832 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:20.833 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.833 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.833 [2024-11-26 20:27:14.279150] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:20.833 [2024-11-26 20:27:14.279225] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:20.833 [2024-11-26 20:27:14.279256] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:20.833 [2024-11-26 20:27:14.279267] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:20.833 [2024-11-26 20:27:14.281773] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:20.833 [2024-11-26 20:27:14.281820] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:20.833 spare 00:14:20.833 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.833 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:20.833 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.833 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.833 [2024-11-26 20:27:14.291256] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:20.833 [2024-11-26 20:27:14.293386] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev2 is claimed 00:14:20.833 [2024-11-26 20:27:14.293481] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:20.833 [2024-11-26 20:27:14.293533] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:20.833 [2024-11-26 20:27:14.293739] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:14:20.833 [2024-11-26 20:27:14.293761] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:20.833 [2024-11-26 20:27:14.294077] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:20.833 [2024-11-26 20:27:14.294273] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:14:20.833 [2024-11-26 20:27:14.294295] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:14:20.833 [2024-11-26 20:27:14.294466] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:20.833 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.833 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:20.833 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:20.833 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:20.833 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:20.833 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:20.833 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:20.833 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.833 20:27:14 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.833 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.833 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.833 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.833 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.833 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.833 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.833 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.833 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.833 "name": "raid_bdev1", 00:14:20.833 "uuid": "5d636902-9723-4bf5-b33b-201f040a0037", 00:14:20.833 "strip_size_kb": 0, 00:14:20.833 "state": "online", 00:14:20.833 "raid_level": "raid1", 00:14:20.833 "superblock": true, 00:14:20.833 "num_base_bdevs": 4, 00:14:20.833 "num_base_bdevs_discovered": 4, 00:14:20.833 "num_base_bdevs_operational": 4, 00:14:20.833 "base_bdevs_list": [ 00:14:20.833 { 00:14:20.833 "name": "BaseBdev1", 00:14:20.833 "uuid": "348ec4b2-7257-582f-933b-90d6779bc03d", 00:14:20.833 "is_configured": true, 00:14:20.833 "data_offset": 2048, 00:14:20.833 "data_size": 63488 00:14:20.833 }, 00:14:20.833 { 00:14:20.833 "name": "BaseBdev2", 00:14:20.833 "uuid": "0742eb18-845e-5986-acc6-d5c15e06b42b", 00:14:20.833 "is_configured": true, 00:14:20.833 "data_offset": 2048, 00:14:20.833 "data_size": 63488 00:14:20.833 }, 00:14:20.833 { 00:14:20.833 "name": "BaseBdev3", 00:14:20.833 "uuid": "d289045b-bff5-5db5-859c-1ea5fc1c11e4", 00:14:20.833 "is_configured": true, 00:14:20.833 "data_offset": 2048, 
00:14:20.833 "data_size": 63488 00:14:20.833 }, 00:14:20.833 { 00:14:20.833 "name": "BaseBdev4", 00:14:20.833 "uuid": "1f84a435-f5d3-5c98-a6cc-0fdde1f20274", 00:14:20.833 "is_configured": true, 00:14:20.833 "data_offset": 2048, 00:14:20.833 "data_size": 63488 00:14:20.833 } 00:14:20.833 ] 00:14:20.833 }' 00:14:20.833 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.833 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.400 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:21.400 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:21.400 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.400 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.400 [2024-11-26 20:27:14.766834] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:21.400 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.400 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:21.400 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.400 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.400 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.400 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:21.400 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.400 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:21.400 20:27:14 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:21.400 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:21.400 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:21.400 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.400 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.400 [2024-11-26 20:27:14.854235] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:21.400 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.400 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:21.400 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:21.400 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:21.400 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:21.400 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:21.400 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:21.400 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.400 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.400 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.400 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.400 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.400 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.400 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.400 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.400 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.400 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.400 "name": "raid_bdev1", 00:14:21.400 "uuid": "5d636902-9723-4bf5-b33b-201f040a0037", 00:14:21.400 "strip_size_kb": 0, 00:14:21.400 "state": "online", 00:14:21.400 "raid_level": "raid1", 00:14:21.400 "superblock": true, 00:14:21.400 "num_base_bdevs": 4, 00:14:21.400 "num_base_bdevs_discovered": 3, 00:14:21.400 "num_base_bdevs_operational": 3, 00:14:21.400 "base_bdevs_list": [ 00:14:21.400 { 00:14:21.400 "name": null, 00:14:21.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.400 "is_configured": false, 00:14:21.400 "data_offset": 0, 00:14:21.400 "data_size": 63488 00:14:21.400 }, 00:14:21.400 { 00:14:21.400 "name": "BaseBdev2", 00:14:21.400 "uuid": "0742eb18-845e-5986-acc6-d5c15e06b42b", 00:14:21.400 "is_configured": true, 00:14:21.400 "data_offset": 2048, 00:14:21.400 "data_size": 63488 00:14:21.400 }, 00:14:21.400 { 00:14:21.400 "name": "BaseBdev3", 00:14:21.400 "uuid": "d289045b-bff5-5db5-859c-1ea5fc1c11e4", 00:14:21.400 "is_configured": true, 00:14:21.400 "data_offset": 2048, 00:14:21.400 "data_size": 63488 00:14:21.400 }, 00:14:21.400 { 00:14:21.400 "name": "BaseBdev4", 00:14:21.400 "uuid": "1f84a435-f5d3-5c98-a6cc-0fdde1f20274", 00:14:21.400 "is_configured": true, 00:14:21.400 "data_offset": 2048, 00:14:21.400 "data_size": 63488 00:14:21.400 } 00:14:21.400 ] 00:14:21.400 }' 00:14:21.400 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.400 20:27:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.659 [2024-11-26 20:27:14.952185] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:21.659 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:21.659 Zero copy mechanism will not be used. 00:14:21.659 Running I/O for 60 seconds... 00:14:21.918 20:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:21.918 20:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.918 20:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.918 [2024-11-26 20:27:15.314109] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:21.918 20:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.918 20:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:21.918 [2024-11-26 20:27:15.351885] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:21.918 [2024-11-26 20:27:15.354262] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:22.176 [2024-11-26 20:27:15.484689] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:22.176 [2024-11-26 20:27:15.487022] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:22.434 [2024-11-26 20:27:15.734150] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:22.726 161.00 IOPS, 483.00 MiB/s [2024-11-26T20:27:16.278Z] [2024-11-26 20:27:16.224245] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: 
split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:22.726 [2024-11-26 20:27:16.224639] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:22.998 20:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:22.998 20:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:22.998 20:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:22.998 20:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:22.998 20:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:22.998 20:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.998 20:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.998 20:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.998 20:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.998 20:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.998 20:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:22.998 "name": "raid_bdev1", 00:14:22.998 "uuid": "5d636902-9723-4bf5-b33b-201f040a0037", 00:14:22.998 "strip_size_kb": 0, 00:14:22.998 "state": "online", 00:14:22.998 "raid_level": "raid1", 00:14:22.998 "superblock": true, 00:14:22.998 "num_base_bdevs": 4, 00:14:22.998 "num_base_bdevs_discovered": 4, 00:14:22.998 "num_base_bdevs_operational": 4, 00:14:22.998 "process": { 00:14:22.998 "type": "rebuild", 00:14:22.998 "target": "spare", 00:14:22.998 "progress": { 00:14:22.998 "blocks": 12288, 00:14:22.998 
"percent": 19 00:14:22.998 } 00:14:22.998 }, 00:14:22.998 "base_bdevs_list": [ 00:14:22.998 { 00:14:22.998 "name": "spare", 00:14:22.998 "uuid": "ea7888ed-60d7-591b-8a65-22550f79f6de", 00:14:22.998 "is_configured": true, 00:14:22.998 "data_offset": 2048, 00:14:22.998 "data_size": 63488 00:14:22.998 }, 00:14:22.998 { 00:14:22.998 "name": "BaseBdev2", 00:14:22.998 "uuid": "0742eb18-845e-5986-acc6-d5c15e06b42b", 00:14:22.998 "is_configured": true, 00:14:22.998 "data_offset": 2048, 00:14:22.998 "data_size": 63488 00:14:22.998 }, 00:14:22.998 { 00:14:22.998 "name": "BaseBdev3", 00:14:22.998 "uuid": "d289045b-bff5-5db5-859c-1ea5fc1c11e4", 00:14:22.998 "is_configured": true, 00:14:22.998 "data_offset": 2048, 00:14:22.998 "data_size": 63488 00:14:22.998 }, 00:14:22.998 { 00:14:22.998 "name": "BaseBdev4", 00:14:22.998 "uuid": "1f84a435-f5d3-5c98-a6cc-0fdde1f20274", 00:14:22.998 "is_configured": true, 00:14:22.998 "data_offset": 2048, 00:14:22.998 "data_size": 63488 00:14:22.998 } 00:14:22.998 ] 00:14:22.998 }' 00:14:22.998 20:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.998 20:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:22.998 20:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.999 [2024-11-26 20:27:16.478868] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:22.999 20:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:22.999 20:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:22.999 20:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.999 20:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.999 
[2024-11-26 20:27:16.503165] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:23.258 [2024-11-26 20:27:16.591482] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:23.258 [2024-11-26 20:27:16.592153] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:23.258 [2024-11-26 20:27:16.593586] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:23.258 [2024-11-26 20:27:16.613296] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:23.258 [2024-11-26 20:27:16.613372] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:23.258 [2024-11-26 20:27:16.613388] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:23.258 [2024-11-26 20:27:16.634394] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:14:23.258 20:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.258 20:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:23.258 20:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:23.258 20:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:23.258 20:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:23.258 20:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:23.258 20:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:23.258 20:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:14:23.258 20:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.258 20:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.258 20:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.258 20:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.258 20:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.258 20:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.258 20:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.258 20:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.258 20:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.258 "name": "raid_bdev1", 00:14:23.258 "uuid": "5d636902-9723-4bf5-b33b-201f040a0037", 00:14:23.258 "strip_size_kb": 0, 00:14:23.258 "state": "online", 00:14:23.258 "raid_level": "raid1", 00:14:23.258 "superblock": true, 00:14:23.258 "num_base_bdevs": 4, 00:14:23.258 "num_base_bdevs_discovered": 3, 00:14:23.258 "num_base_bdevs_operational": 3, 00:14:23.258 "base_bdevs_list": [ 00:14:23.258 { 00:14:23.258 "name": null, 00:14:23.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.258 "is_configured": false, 00:14:23.258 "data_offset": 0, 00:14:23.258 "data_size": 63488 00:14:23.258 }, 00:14:23.258 { 00:14:23.258 "name": "BaseBdev2", 00:14:23.258 "uuid": "0742eb18-845e-5986-acc6-d5c15e06b42b", 00:14:23.258 "is_configured": true, 00:14:23.258 "data_offset": 2048, 00:14:23.258 "data_size": 63488 00:14:23.258 }, 00:14:23.258 { 00:14:23.258 "name": "BaseBdev3", 00:14:23.258 "uuid": "d289045b-bff5-5db5-859c-1ea5fc1c11e4", 00:14:23.258 "is_configured": true, 00:14:23.258 "data_offset": 
2048, 00:14:23.258 "data_size": 63488 00:14:23.258 }, 00:14:23.258 { 00:14:23.258 "name": "BaseBdev4", 00:14:23.258 "uuid": "1f84a435-f5d3-5c98-a6cc-0fdde1f20274", 00:14:23.258 "is_configured": true, 00:14:23.258 "data_offset": 2048, 00:14:23.258 "data_size": 63488 00:14:23.258 } 00:14:23.258 ] 00:14:23.258 }' 00:14:23.258 20:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.258 20:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.776 144.00 IOPS, 432.00 MiB/s [2024-11-26T20:27:17.328Z] 20:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:23.776 20:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:23.776 20:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:23.776 20:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:23.776 20:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:23.776 20:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.776 20:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.776 20:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.776 20:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.776 20:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.776 20:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:23.776 "name": "raid_bdev1", 00:14:23.776 "uuid": "5d636902-9723-4bf5-b33b-201f040a0037", 00:14:23.776 "strip_size_kb": 0, 00:14:23.776 "state": "online", 00:14:23.776 "raid_level": 
"raid1", 00:14:23.776 "superblock": true, 00:14:23.776 "num_base_bdevs": 4, 00:14:23.776 "num_base_bdevs_discovered": 3, 00:14:23.776 "num_base_bdevs_operational": 3, 00:14:23.776 "base_bdevs_list": [ 00:14:23.776 { 00:14:23.776 "name": null, 00:14:23.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.776 "is_configured": false, 00:14:23.776 "data_offset": 0, 00:14:23.776 "data_size": 63488 00:14:23.776 }, 00:14:23.776 { 00:14:23.776 "name": "BaseBdev2", 00:14:23.776 "uuid": "0742eb18-845e-5986-acc6-d5c15e06b42b", 00:14:23.776 "is_configured": true, 00:14:23.776 "data_offset": 2048, 00:14:23.776 "data_size": 63488 00:14:23.776 }, 00:14:23.776 { 00:14:23.776 "name": "BaseBdev3", 00:14:23.776 "uuid": "d289045b-bff5-5db5-859c-1ea5fc1c11e4", 00:14:23.776 "is_configured": true, 00:14:23.776 "data_offset": 2048, 00:14:23.776 "data_size": 63488 00:14:23.776 }, 00:14:23.776 { 00:14:23.776 "name": "BaseBdev4", 00:14:23.776 "uuid": "1f84a435-f5d3-5c98-a6cc-0fdde1f20274", 00:14:23.776 "is_configured": true, 00:14:23.776 "data_offset": 2048, 00:14:23.776 "data_size": 63488 00:14:23.776 } 00:14:23.776 ] 00:14:23.776 }' 00:14:23.776 20:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:23.776 20:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:23.776 20:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:23.776 20:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:23.776 20:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:23.776 20:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.776 20:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.776 [2024-11-26 20:27:17.256972] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:23.776 20:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.776 20:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:23.776 [2024-11-26 20:27:17.309789] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:23.776 [2024-11-26 20:27:17.312017] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:24.035 [2024-11-26 20:27:17.423359] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:24.035 [2024-11-26 20:27:17.424039] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:24.295 [2024-11-26 20:27:17.655553] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:24.295 [2024-11-26 20:27:17.656724] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:24.554 154.67 IOPS, 464.00 MiB/s [2024-11-26T20:27:18.106Z] [2024-11-26 20:27:18.067362] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:24.814 [2024-11-26 20:27:18.187805] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:24.814 [2024-11-26 20:27:18.188205] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:24.814 20:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:24.814 20:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:24.814 20:27:18 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:24.814 20:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:24.814 20:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:24.814 20:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.814 20:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.814 20:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.814 20:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.814 20:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.814 20:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:24.814 "name": "raid_bdev1", 00:14:24.814 "uuid": "5d636902-9723-4bf5-b33b-201f040a0037", 00:14:24.814 "strip_size_kb": 0, 00:14:24.814 "state": "online", 00:14:24.814 "raid_level": "raid1", 00:14:24.814 "superblock": true, 00:14:24.814 "num_base_bdevs": 4, 00:14:24.814 "num_base_bdevs_discovered": 4, 00:14:24.814 "num_base_bdevs_operational": 4, 00:14:24.814 "process": { 00:14:24.814 "type": "rebuild", 00:14:24.814 "target": "spare", 00:14:24.814 "progress": { 00:14:24.814 "blocks": 10240, 00:14:24.814 "percent": 16 00:14:24.814 } 00:14:24.814 }, 00:14:24.814 "base_bdevs_list": [ 00:14:24.814 { 00:14:24.814 "name": "spare", 00:14:24.814 "uuid": "ea7888ed-60d7-591b-8a65-22550f79f6de", 00:14:24.814 "is_configured": true, 00:14:24.814 "data_offset": 2048, 00:14:24.814 "data_size": 63488 00:14:24.814 }, 00:14:24.814 { 00:14:24.814 "name": "BaseBdev2", 00:14:24.814 "uuid": "0742eb18-845e-5986-acc6-d5c15e06b42b", 00:14:24.814 "is_configured": true, 00:14:24.814 "data_offset": 2048, 00:14:24.814 "data_size": 
63488 00:14:24.814 }, 00:14:24.814 { 00:14:24.814 "name": "BaseBdev3", 00:14:24.814 "uuid": "d289045b-bff5-5db5-859c-1ea5fc1c11e4", 00:14:24.814 "is_configured": true, 00:14:24.814 "data_offset": 2048, 00:14:24.814 "data_size": 63488 00:14:24.814 }, 00:14:24.814 { 00:14:24.814 "name": "BaseBdev4", 00:14:24.814 "uuid": "1f84a435-f5d3-5c98-a6cc-0fdde1f20274", 00:14:24.814 "is_configured": true, 00:14:24.814 "data_offset": 2048, 00:14:24.814 "data_size": 63488 00:14:24.814 } 00:14:24.814 ] 00:14:24.814 }' 00:14:24.814 20:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:25.074 20:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:25.074 20:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:25.074 20:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:25.074 20:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:25.074 20:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:25.074 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:25.074 20:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:25.074 20:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:25.074 20:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:25.074 20:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:25.074 20:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.074 20:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.074 [2024-11-26 
20:27:18.453681] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:25.334 [2024-11-26 20:27:18.733015] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006080 00:14:25.334 [2024-11-26 20:27:18.733071] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:25.334 20:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.334 20:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:25.334 20:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:25.334 20:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:25.334 20:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:25.334 20:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:25.334 20:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:25.334 20:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:25.334 20:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.334 20:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.334 20:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.334 20:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.334 20:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.334 20:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:25.334 "name": "raid_bdev1", 00:14:25.334 "uuid": 
"5d636902-9723-4bf5-b33b-201f040a0037", 00:14:25.334 "strip_size_kb": 0, 00:14:25.334 "state": "online", 00:14:25.334 "raid_level": "raid1", 00:14:25.334 "superblock": true, 00:14:25.334 "num_base_bdevs": 4, 00:14:25.334 "num_base_bdevs_discovered": 3, 00:14:25.334 "num_base_bdevs_operational": 3, 00:14:25.334 "process": { 00:14:25.334 "type": "rebuild", 00:14:25.334 "target": "spare", 00:14:25.334 "progress": { 00:14:25.334 "blocks": 14336, 00:14:25.334 "percent": 22 00:14:25.334 } 00:14:25.334 }, 00:14:25.334 "base_bdevs_list": [ 00:14:25.334 { 00:14:25.334 "name": "spare", 00:14:25.334 "uuid": "ea7888ed-60d7-591b-8a65-22550f79f6de", 00:14:25.334 "is_configured": true, 00:14:25.334 "data_offset": 2048, 00:14:25.334 "data_size": 63488 00:14:25.334 }, 00:14:25.334 { 00:14:25.334 "name": null, 00:14:25.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.334 "is_configured": false, 00:14:25.334 "data_offset": 0, 00:14:25.334 "data_size": 63488 00:14:25.334 }, 00:14:25.334 { 00:14:25.334 "name": "BaseBdev3", 00:14:25.334 "uuid": "d289045b-bff5-5db5-859c-1ea5fc1c11e4", 00:14:25.334 "is_configured": true, 00:14:25.334 "data_offset": 2048, 00:14:25.334 "data_size": 63488 00:14:25.334 }, 00:14:25.334 { 00:14:25.334 "name": "BaseBdev4", 00:14:25.334 "uuid": "1f84a435-f5d3-5c98-a6cc-0fdde1f20274", 00:14:25.334 "is_configured": true, 00:14:25.334 "data_offset": 2048, 00:14:25.334 "data_size": 63488 00:14:25.334 } 00:14:25.334 ] 00:14:25.334 }' 00:14:25.334 20:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:25.334 20:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:25.334 20:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:25.334 [2024-11-26 20:27:18.847017] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:25.334 
[2024-11-26 20:27:18.847375] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:25.334 20:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:25.334 20:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=427 00:14:25.334 20:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:25.334 20:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:25.334 20:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:25.334 20:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:25.334 20:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:25.334 20:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:25.334 20:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.334 20:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.334 20:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.334 20:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.593 20:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.593 20:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:25.593 "name": "raid_bdev1", 00:14:25.593 "uuid": "5d636902-9723-4bf5-b33b-201f040a0037", 00:14:25.593 "strip_size_kb": 0, 00:14:25.593 "state": "online", 00:14:25.593 "raid_level": "raid1", 00:14:25.593 "superblock": true, 00:14:25.593 "num_base_bdevs": 4, 
00:14:25.593 "num_base_bdevs_discovered": 3, 00:14:25.593 "num_base_bdevs_operational": 3, 00:14:25.593 "process": { 00:14:25.593 "type": "rebuild", 00:14:25.593 "target": "spare", 00:14:25.593 "progress": { 00:14:25.593 "blocks": 16384, 00:14:25.593 "percent": 25 00:14:25.593 } 00:14:25.593 }, 00:14:25.593 "base_bdevs_list": [ 00:14:25.593 { 00:14:25.593 "name": "spare", 00:14:25.593 "uuid": "ea7888ed-60d7-591b-8a65-22550f79f6de", 00:14:25.593 "is_configured": true, 00:14:25.593 "data_offset": 2048, 00:14:25.593 "data_size": 63488 00:14:25.593 }, 00:14:25.593 { 00:14:25.593 "name": null, 00:14:25.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.593 "is_configured": false, 00:14:25.593 "data_offset": 0, 00:14:25.593 "data_size": 63488 00:14:25.593 }, 00:14:25.593 { 00:14:25.593 "name": "BaseBdev3", 00:14:25.593 "uuid": "d289045b-bff5-5db5-859c-1ea5fc1c11e4", 00:14:25.593 "is_configured": true, 00:14:25.593 "data_offset": 2048, 00:14:25.593 "data_size": 63488 00:14:25.593 }, 00:14:25.593 { 00:14:25.593 "name": "BaseBdev4", 00:14:25.593 "uuid": "1f84a435-f5d3-5c98-a6cc-0fdde1f20274", 00:14:25.593 "is_configured": true, 00:14:25.593 "data_offset": 2048, 00:14:25.593 "data_size": 63488 00:14:25.593 } 00:14:25.593 ] 00:14:25.593 }' 00:14:25.593 20:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:25.593 20:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:25.593 20:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:25.593 130.50 IOPS, 391.50 MiB/s [2024-11-26T20:27:19.145Z] 20:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:25.593 20:27:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:25.851 [2024-11-26 20:27:19.204772] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 
offset_begin: 18432 offset_end: 24576 00:14:26.417 [2024-11-26 20:27:19.952566] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:14:26.417 [2024-11-26 20:27:19.953876] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:14:26.674 117.00 IOPS, 351.00 MiB/s [2024-11-26T20:27:20.226Z] 20:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:26.674 20:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:26.674 20:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:26.675 20:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:26.675 20:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:26.675 20:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:26.675 20:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.675 20:27:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.675 20:27:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.675 20:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.675 20:27:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.675 20:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:26.675 "name": "raid_bdev1", 00:14:26.675 "uuid": "5d636902-9723-4bf5-b33b-201f040a0037", 00:14:26.675 "strip_size_kb": 0, 00:14:26.675 "state": "online", 00:14:26.675 "raid_level": "raid1", 00:14:26.675 "superblock": 
true, 00:14:26.675 "num_base_bdevs": 4, 00:14:26.675 "num_base_bdevs_discovered": 3, 00:14:26.675 "num_base_bdevs_operational": 3, 00:14:26.675 "process": { 00:14:26.675 "type": "rebuild", 00:14:26.675 "target": "spare", 00:14:26.675 "progress": { 00:14:26.675 "blocks": 32768, 00:14:26.675 "percent": 51 00:14:26.675 } 00:14:26.675 }, 00:14:26.675 "base_bdevs_list": [ 00:14:26.675 { 00:14:26.675 "name": "spare", 00:14:26.675 "uuid": "ea7888ed-60d7-591b-8a65-22550f79f6de", 00:14:26.675 "is_configured": true, 00:14:26.675 "data_offset": 2048, 00:14:26.675 "data_size": 63488 00:14:26.675 }, 00:14:26.675 { 00:14:26.675 "name": null, 00:14:26.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.675 "is_configured": false, 00:14:26.675 "data_offset": 0, 00:14:26.675 "data_size": 63488 00:14:26.675 }, 00:14:26.675 { 00:14:26.675 "name": "BaseBdev3", 00:14:26.675 "uuid": "d289045b-bff5-5db5-859c-1ea5fc1c11e4", 00:14:26.675 "is_configured": true, 00:14:26.675 "data_offset": 2048, 00:14:26.675 "data_size": 63488 00:14:26.675 }, 00:14:26.675 { 00:14:26.675 "name": "BaseBdev4", 00:14:26.675 "uuid": "1f84a435-f5d3-5c98-a6cc-0fdde1f20274", 00:14:26.675 "is_configured": true, 00:14:26.675 "data_offset": 2048, 00:14:26.675 "data_size": 63488 00:14:26.675 } 00:14:26.675 ] 00:14:26.675 }' 00:14:26.675 20:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:26.675 20:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:26.675 20:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:26.675 20:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:26.675 20:27:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:27.875 106.50 IOPS, 319.50 MiB/s [2024-11-26T20:27:21.427Z] 20:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # 
(( SECONDS < timeout )) 00:14:27.875 20:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:27.875 20:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:27.875 20:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:27.875 20:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:27.875 20:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:27.875 20:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.875 20:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.875 20:27:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.875 20:27:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.875 20:27:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.875 20:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:27.875 "name": "raid_bdev1", 00:14:27.875 "uuid": "5d636902-9723-4bf5-b33b-201f040a0037", 00:14:27.875 "strip_size_kb": 0, 00:14:27.875 "state": "online", 00:14:27.875 "raid_level": "raid1", 00:14:27.875 "superblock": true, 00:14:27.875 "num_base_bdevs": 4, 00:14:27.875 "num_base_bdevs_discovered": 3, 00:14:27.875 "num_base_bdevs_operational": 3, 00:14:27.875 "process": { 00:14:27.875 "type": "rebuild", 00:14:27.875 "target": "spare", 00:14:27.875 "progress": { 00:14:27.875 "blocks": 51200, 00:14:27.875 "percent": 80 00:14:27.875 } 00:14:27.875 }, 00:14:27.875 "base_bdevs_list": [ 00:14:27.875 { 00:14:27.875 "name": "spare", 00:14:27.875 "uuid": "ea7888ed-60d7-591b-8a65-22550f79f6de", 00:14:27.875 
"is_configured": true, 00:14:27.875 "data_offset": 2048, 00:14:27.875 "data_size": 63488 00:14:27.875 }, 00:14:27.875 { 00:14:27.875 "name": null, 00:14:27.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.876 "is_configured": false, 00:14:27.876 "data_offset": 0, 00:14:27.876 "data_size": 63488 00:14:27.876 }, 00:14:27.876 { 00:14:27.876 "name": "BaseBdev3", 00:14:27.876 "uuid": "d289045b-bff5-5db5-859c-1ea5fc1c11e4", 00:14:27.876 "is_configured": true, 00:14:27.876 "data_offset": 2048, 00:14:27.876 "data_size": 63488 00:14:27.876 }, 00:14:27.876 { 00:14:27.876 "name": "BaseBdev4", 00:14:27.876 "uuid": "1f84a435-f5d3-5c98-a6cc-0fdde1f20274", 00:14:27.876 "is_configured": true, 00:14:27.876 "data_offset": 2048, 00:14:27.876 "data_size": 63488 00:14:27.876 } 00:14:27.876 ] 00:14:27.876 }' 00:14:27.876 20:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:27.876 20:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:27.876 20:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:27.876 20:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:27.876 20:27:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:27.876 [2024-11-26 20:27:21.416131] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:14:28.134 [2024-11-26 20:27:21.633656] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:14:28.134 [2024-11-26 20:27:21.634035] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:14:28.700 [2024-11-26 20:27:21.967873] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 
00:14:28.700 96.14 IOPS, 288.43 MiB/s [2024-11-26T20:27:22.252Z] [2024-11-26 20:27:22.065860] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:28.700 [2024-11-26 20:27:22.069285] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:28.960 20:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:28.960 20:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:28.960 20:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:28.960 20:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:28.960 20:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:28.960 20:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:28.960 20:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.960 20:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.960 20:27:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.960 20:27:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:28.960 20:27:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.960 20:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:28.960 "name": "raid_bdev1", 00:14:28.960 "uuid": "5d636902-9723-4bf5-b33b-201f040a0037", 00:14:28.960 "strip_size_kb": 0, 00:14:28.960 "state": "online", 00:14:28.960 "raid_level": "raid1", 00:14:28.960 "superblock": true, 00:14:28.960 "num_base_bdevs": 4, 00:14:28.960 "num_base_bdevs_discovered": 3, 00:14:28.960 
"num_base_bdevs_operational": 3, 00:14:28.960 "base_bdevs_list": [ 00:14:28.960 { 00:14:28.960 "name": "spare", 00:14:28.960 "uuid": "ea7888ed-60d7-591b-8a65-22550f79f6de", 00:14:28.960 "is_configured": true, 00:14:28.960 "data_offset": 2048, 00:14:28.960 "data_size": 63488 00:14:28.960 }, 00:14:28.960 { 00:14:28.960 "name": null, 00:14:28.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.960 "is_configured": false, 00:14:28.960 "data_offset": 0, 00:14:28.960 "data_size": 63488 00:14:28.960 }, 00:14:28.960 { 00:14:28.960 "name": "BaseBdev3", 00:14:28.960 "uuid": "d289045b-bff5-5db5-859c-1ea5fc1c11e4", 00:14:28.960 "is_configured": true, 00:14:28.960 "data_offset": 2048, 00:14:28.960 "data_size": 63488 00:14:28.960 }, 00:14:28.960 { 00:14:28.960 "name": "BaseBdev4", 00:14:28.960 "uuid": "1f84a435-f5d3-5c98-a6cc-0fdde1f20274", 00:14:28.960 "is_configured": true, 00:14:28.960 "data_offset": 2048, 00:14:28.960 "data_size": 63488 00:14:28.960 } 00:14:28.960 ] 00:14:28.960 }' 00:14:28.960 20:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.960 20:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:28.960 20:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.960 20:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:28.960 20:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:14:28.960 20:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:28.960 20:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:28.961 20:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:28.961 20:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # 
local target=none 00:14:28.961 20:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:28.961 20:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.961 20:27:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.961 20:27:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:28.961 20:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.220 20:27:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.220 20:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:29.220 "name": "raid_bdev1", 00:14:29.220 "uuid": "5d636902-9723-4bf5-b33b-201f040a0037", 00:14:29.220 "strip_size_kb": 0, 00:14:29.220 "state": "online", 00:14:29.220 "raid_level": "raid1", 00:14:29.220 "superblock": true, 00:14:29.220 "num_base_bdevs": 4, 00:14:29.220 "num_base_bdevs_discovered": 3, 00:14:29.220 "num_base_bdevs_operational": 3, 00:14:29.220 "base_bdevs_list": [ 00:14:29.220 { 00:14:29.220 "name": "spare", 00:14:29.220 "uuid": "ea7888ed-60d7-591b-8a65-22550f79f6de", 00:14:29.220 "is_configured": true, 00:14:29.220 "data_offset": 2048, 00:14:29.220 "data_size": 63488 00:14:29.220 }, 00:14:29.220 { 00:14:29.220 "name": null, 00:14:29.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.220 "is_configured": false, 00:14:29.220 "data_offset": 0, 00:14:29.220 "data_size": 63488 00:14:29.220 }, 00:14:29.220 { 00:14:29.220 "name": "BaseBdev3", 00:14:29.220 "uuid": "d289045b-bff5-5db5-859c-1ea5fc1c11e4", 00:14:29.220 "is_configured": true, 00:14:29.220 "data_offset": 2048, 00:14:29.220 "data_size": 63488 00:14:29.220 }, 00:14:29.220 { 00:14:29.220 "name": "BaseBdev4", 00:14:29.220 "uuid": "1f84a435-f5d3-5c98-a6cc-0fdde1f20274", 00:14:29.220 "is_configured": true, 
00:14:29.220 "data_offset": 2048, 00:14:29.220 "data_size": 63488 00:14:29.220 } 00:14:29.220 ] 00:14:29.220 }' 00:14:29.220 20:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:29.220 20:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:29.220 20:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:29.220 20:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:29.220 20:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:29.220 20:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:29.220 20:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:29.220 20:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:29.220 20:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:29.220 20:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:29.220 20:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.220 20:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.220 20:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.220 20:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.220 20:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.220 20:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.220 20:27:22 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.220 20:27:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.220 20:27:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.220 20:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.220 "name": "raid_bdev1", 00:14:29.220 "uuid": "5d636902-9723-4bf5-b33b-201f040a0037", 00:14:29.220 "strip_size_kb": 0, 00:14:29.220 "state": "online", 00:14:29.220 "raid_level": "raid1", 00:14:29.220 "superblock": true, 00:14:29.220 "num_base_bdevs": 4, 00:14:29.220 "num_base_bdevs_discovered": 3, 00:14:29.220 "num_base_bdevs_operational": 3, 00:14:29.220 "base_bdevs_list": [ 00:14:29.220 { 00:14:29.220 "name": "spare", 00:14:29.220 "uuid": "ea7888ed-60d7-591b-8a65-22550f79f6de", 00:14:29.220 "is_configured": true, 00:14:29.220 "data_offset": 2048, 00:14:29.220 "data_size": 63488 00:14:29.220 }, 00:14:29.220 { 00:14:29.220 "name": null, 00:14:29.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.220 "is_configured": false, 00:14:29.220 "data_offset": 0, 00:14:29.220 "data_size": 63488 00:14:29.220 }, 00:14:29.220 { 00:14:29.220 "name": "BaseBdev3", 00:14:29.220 "uuid": "d289045b-bff5-5db5-859c-1ea5fc1c11e4", 00:14:29.220 "is_configured": true, 00:14:29.220 "data_offset": 2048, 00:14:29.220 "data_size": 63488 00:14:29.220 }, 00:14:29.220 { 00:14:29.220 "name": "BaseBdev4", 00:14:29.220 "uuid": "1f84a435-f5d3-5c98-a6cc-0fdde1f20274", 00:14:29.220 "is_configured": true, 00:14:29.220 "data_offset": 2048, 00:14:29.220 "data_size": 63488 00:14:29.220 } 00:14:29.220 ] 00:14:29.220 }' 00:14:29.220 20:27:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.220 20:27:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.479 91.00 IOPS, 273.00 MiB/s [2024-11-26T20:27:23.031Z] 20:27:23 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:29.479 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.479 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.479 [2024-11-26 20:27:23.016139] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:29.479 [2024-11-26 20:27:23.016188] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:29.738 00:14:29.738 Latency(us) 00:14:29.738 [2024-11-26T20:27:23.290Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:29.738 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:29.738 raid_bdev1 : 8.13 89.80 269.41 0.00 0.00 15134.60 338.05 122715.44 00:14:29.738 [2024-11-26T20:27:23.290Z] =================================================================================================================== 00:14:29.738 [2024-11-26T20:27:23.290Z] Total : 89.80 269.41 0.00 0.00 15134.60 338.05 122715.44 00:14:29.738 [2024-11-26 20:27:23.072543] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:29.738 [2024-11-26 20:27:23.072635] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:29.738 [2024-11-26 20:27:23.072761] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:29.738 [2024-11-26 20:27:23.072786] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:14:29.738 { 00:14:29.738 "results": [ 00:14:29.738 { 00:14:29.738 "job": "raid_bdev1", 00:14:29.738 "core_mask": "0x1", 00:14:29.738 "workload": "randrw", 00:14:29.738 "percentage": 50, 00:14:29.738 "status": "finished", 00:14:29.738 "queue_depth": 2, 00:14:29.738 "io_size": 3145728, 00:14:29.738 "runtime": 
8.128994, 00:14:29.738 "iops": 89.80200994120551, 00:14:29.738 "mibps": 269.40602982361656, 00:14:29.738 "io_failed": 0, 00:14:29.738 "io_timeout": 0, 00:14:29.738 "avg_latency_us": 15134.602390381051, 00:14:29.738 "min_latency_us": 338.05414847161575, 00:14:29.738 "max_latency_us": 122715.44454148471 00:14:29.738 } 00:14:29.738 ], 00:14:29.738 "core_count": 1 00:14:29.738 } 00:14:29.738 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.738 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:29.738 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.738 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.738 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.738 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.738 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:29.738 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:29.738 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:29.738 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:29.738 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:29.738 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:29.738 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:29.738 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:29.738 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local 
nbd_list 00:14:29.738 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:29.738 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:29.738 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:29.738 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:29.998 /dev/nbd0 00:14:29.998 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:29.998 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:29.998 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:29.998 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:14:29.998 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:29.998 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:29.998 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:29.998 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:14:29.998 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:29.998 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:29.998 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:29.998 1+0 records in 00:14:29.998 1+0 records out 00:14:29.998 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000282774 s, 14.5 MB/s 00:14:29.998 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:29.998 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:14:29.998 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:29.998 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:29.998 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:14:29.998 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:29.998 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:29.998 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:29.998 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:29.998 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:29.998 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:29.998 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:29.998 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:29.998 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:29.998 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:29.998 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:29.998 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:29.998 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:29.998 20:27:23 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/nbd_common.sh@12 -- # local i 00:14:29.998 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:29.998 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:29.998 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:30.258 /dev/nbd1 00:14:30.258 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:30.258 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:30.258 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:30.258 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:14:30.258 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:30.258 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:30.258 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:30.258 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:14:30.258 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:30.258 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:30.258 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:30.258 1+0 records in 00:14:30.258 1+0 records out 00:14:30.258 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268253 s, 15.3 MB/s 00:14:30.258 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
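Each `waitfornbd` call above probes the freshly started NBD device the same way: check the name is listed in `/proc/partitions`, `dd` a single 4 KiB block off the device into a scratch file, then `stat` the scratch file and require a non-zero size before returning 0. A small re-creation of that readability probe, with `/dev/zero` standing in for the real `/dev/nbdX` so it runs anywhere:

```shell
#!/usr/bin/env bash
# Readability probe in the style of waitfornbd: read one 4 KiB
# block from the device, then verify the copy is non-empty.
dev=/dev/zero                      # hypothetical device under test
scratch=$(mktemp)
dd if="$dev" of="$scratch" bs=4096 count=1 2>/dev/null
size=$(stat -c %s "$scratch")      # GNU stat: size in bytes
if [ "$size" != 0 ]; then
  echo "device readable ($size bytes)"
fi
rm -f "$scratch"
```

The real helper wraps this in a retry loop (`(( i <= 20 ))` in the trace) because an NBD device can appear in `/proc/partitions` a moment before it accepts I/O.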
00:14:30.258 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:14:30.258 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:30.258 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:30.258 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:14:30.258 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:30.258 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:30.258 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:30.258 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:30.258 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:30.258 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:30.258 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:30.258 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:30.258 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:30.258 20:27:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:30.516 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:30.516 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:30.516 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:30.516 20:27:24 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:30.516 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:30.516 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:30.516 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:30.516 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:30.516 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:30.516 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:30.516 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:30.779 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:30.779 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:30.779 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:30.779 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:30.779 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:30.779 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:30.779 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:30.779 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:30.779 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:30.779 /dev/nbd1 00:14:30.779 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 
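The data check at `bdev_raid.sh@731` is `cmp -i 1048576 /dev/nbd0 /dev/nbd1`: `-i` (`--ignore-initial`) makes `cmp` skip the first 1 MiB of *both* inputs, so the per-bdev superblock region (which legitimately differs between the spare and each base bdev) is excluded and only the rebuilt data payload is compared. A scaled-down demo of the same idea, skipping 4 bytes instead of 1 MiB:

```shell
#!/usr/bin/env bash
# Two files that differ only in their first 4 bytes (the stand-in
# "superblock"); cmp -i 4 skips that region in both inputs.
a=$(mktemp); b=$(mktemp)
printf 'AAAApayload' > "$a"
printf 'BBBBpayload' > "$b"
if cmp -s -i 4 "$a" "$b"; then   # -s: silent, exit status only
  result=match
else
  result=differ
fi
echo "$result"   # prints: match
rm -f "$a" "$b"
```

In the log, `cmp` exiting 0 is what lets the test proceed to `nbd_stop_disks` and move on to the next base bdev (BaseBdev4 on the same `/dev/nbd1` slot).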
00:14:30.779 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:30.779 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:30.779 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:14:30.779 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:30.779 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:30.779 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:31.042 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:14:31.042 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:31.042 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:31.042 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:31.042 1+0 records in 00:14:31.042 1+0 records out 00:14:31.042 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296527 s, 13.8 MB/s 00:14:31.042 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:31.042 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:14:31.042 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:31.042 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:31.042 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:14:31.042 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # 
(( i++ )) 00:14:31.042 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:31.042 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:31.042 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:31.042 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:31.042 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:31.042 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:31.042 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:31.042 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:31.042 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:31.302 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:31.302 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:31.302 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:31.302 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:31.302 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:31.302 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:31.302 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:31.302 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:31.302 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks 
/var/tmp/spdk.sock /dev/nbd0 00:14:31.302 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:31.302 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:31.302 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:31.302 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:31.302 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:31.302 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:31.562 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:31.562 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:31.562 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:31.562 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:31.562 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:31.562 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:31.562 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:31.562 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:31.562 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:31.562 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:31.562 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.562 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:14:31.562 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.562 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:31.562 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.562 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.562 [2024-11-26 20:27:24.926160] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:31.562 [2024-11-26 20:27:24.926220] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:31.562 [2024-11-26 20:27:24.926244] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:31.562 [2024-11-26 20:27:24.926257] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:31.562 [2024-11-26 20:27:24.928744] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:31.562 [2024-11-26 20:27:24.928780] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:31.562 [2024-11-26 20:27:24.928866] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:31.562 [2024-11-26 20:27:24.928921] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:31.562 [2024-11-26 20:27:24.929056] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:31.562 [2024-11-26 20:27:24.929168] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:31.562 spare 00:14:31.562 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.562 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:31.562 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.562 20:27:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.562 [2024-11-26 20:27:25.029094] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:14:31.562 [2024-11-26 20:27:25.029158] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:31.562 [2024-11-26 20:27:25.029518] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000036fc0 00:14:31.562 [2024-11-26 20:27:25.029743] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:14:31.562 [2024-11-26 20:27:25.029765] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:14:31.562 [2024-11-26 20:27:25.029974] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:31.562 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.562 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:31.562 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:31.562 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:31.562 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:31.562 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:31.562 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:31.562 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.563 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.563 20:27:25 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.563 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.563 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.563 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.563 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.563 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.563 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.563 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.563 "name": "raid_bdev1", 00:14:31.563 "uuid": "5d636902-9723-4bf5-b33b-201f040a0037", 00:14:31.563 "strip_size_kb": 0, 00:14:31.563 "state": "online", 00:14:31.563 "raid_level": "raid1", 00:14:31.563 "superblock": true, 00:14:31.563 "num_base_bdevs": 4, 00:14:31.563 "num_base_bdevs_discovered": 3, 00:14:31.563 "num_base_bdevs_operational": 3, 00:14:31.563 "base_bdevs_list": [ 00:14:31.563 { 00:14:31.563 "name": "spare", 00:14:31.563 "uuid": "ea7888ed-60d7-591b-8a65-22550f79f6de", 00:14:31.563 "is_configured": true, 00:14:31.563 "data_offset": 2048, 00:14:31.563 "data_size": 63488 00:14:31.563 }, 00:14:31.563 { 00:14:31.563 "name": null, 00:14:31.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.563 "is_configured": false, 00:14:31.563 "data_offset": 2048, 00:14:31.563 "data_size": 63488 00:14:31.563 }, 00:14:31.563 { 00:14:31.563 "name": "BaseBdev3", 00:14:31.563 "uuid": "d289045b-bff5-5db5-859c-1ea5fc1c11e4", 00:14:31.563 "is_configured": true, 00:14:31.563 "data_offset": 2048, 00:14:31.563 "data_size": 63488 00:14:31.563 }, 00:14:31.563 { 00:14:31.563 "name": "BaseBdev4", 00:14:31.563 "uuid": 
"1f84a435-f5d3-5c98-a6cc-0fdde1f20274", 00:14:31.563 "is_configured": true, 00:14:31.563 "data_offset": 2048, 00:14:31.563 "data_size": 63488 00:14:31.563 } 00:14:31.563 ] 00:14:31.563 }' 00:14:31.563 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.563 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.132 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:32.132 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:32.132 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:32.132 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:32.132 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:32.132 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.132 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.132 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.132 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.132 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.132 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:32.132 "name": "raid_bdev1", 00:14:32.132 "uuid": "5d636902-9723-4bf5-b33b-201f040a0037", 00:14:32.132 "strip_size_kb": 0, 00:14:32.132 "state": "online", 00:14:32.132 "raid_level": "raid1", 00:14:32.132 "superblock": true, 00:14:32.132 "num_base_bdevs": 4, 00:14:32.132 "num_base_bdevs_discovered": 3, 00:14:32.132 "num_base_bdevs_operational": 3, 00:14:32.132 
"base_bdevs_list": [ 00:14:32.132 { 00:14:32.132 "name": "spare", 00:14:32.132 "uuid": "ea7888ed-60d7-591b-8a65-22550f79f6de", 00:14:32.132 "is_configured": true, 00:14:32.132 "data_offset": 2048, 00:14:32.132 "data_size": 63488 00:14:32.132 }, 00:14:32.132 { 00:14:32.132 "name": null, 00:14:32.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.132 "is_configured": false, 00:14:32.132 "data_offset": 2048, 00:14:32.132 "data_size": 63488 00:14:32.132 }, 00:14:32.132 { 00:14:32.132 "name": "BaseBdev3", 00:14:32.132 "uuid": "d289045b-bff5-5db5-859c-1ea5fc1c11e4", 00:14:32.132 "is_configured": true, 00:14:32.132 "data_offset": 2048, 00:14:32.132 "data_size": 63488 00:14:32.132 }, 00:14:32.132 { 00:14:32.132 "name": "BaseBdev4", 00:14:32.132 "uuid": "1f84a435-f5d3-5c98-a6cc-0fdde1f20274", 00:14:32.132 "is_configured": true, 00:14:32.132 "data_offset": 2048, 00:14:32.132 "data_size": 63488 00:14:32.132 } 00:14:32.132 ] 00:14:32.132 }' 00:14:32.132 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:32.132 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:32.132 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:32.132 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:32.132 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:32.132 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.132 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.132 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.132 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.132 20:27:25 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:32.132 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:32.132 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.132 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.132 [2024-11-26 20:27:25.673125] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:32.132 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.132 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:32.132 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:32.132 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:32.132 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:32.132 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:32.132 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:32.132 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.132 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.132 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.132 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.392 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.392 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:32.392 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.392 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.392 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.392 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.392 "name": "raid_bdev1", 00:14:32.392 "uuid": "5d636902-9723-4bf5-b33b-201f040a0037", 00:14:32.392 "strip_size_kb": 0, 00:14:32.392 "state": "online", 00:14:32.392 "raid_level": "raid1", 00:14:32.392 "superblock": true, 00:14:32.392 "num_base_bdevs": 4, 00:14:32.392 "num_base_bdevs_discovered": 2, 00:14:32.392 "num_base_bdevs_operational": 2, 00:14:32.392 "base_bdevs_list": [ 00:14:32.392 { 00:14:32.392 "name": null, 00:14:32.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.392 "is_configured": false, 00:14:32.392 "data_offset": 0, 00:14:32.392 "data_size": 63488 00:14:32.392 }, 00:14:32.392 { 00:14:32.392 "name": null, 00:14:32.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.392 "is_configured": false, 00:14:32.392 "data_offset": 2048, 00:14:32.392 "data_size": 63488 00:14:32.392 }, 00:14:32.392 { 00:14:32.392 "name": "BaseBdev3", 00:14:32.392 "uuid": "d289045b-bff5-5db5-859c-1ea5fc1c11e4", 00:14:32.392 "is_configured": true, 00:14:32.392 "data_offset": 2048, 00:14:32.392 "data_size": 63488 00:14:32.392 }, 00:14:32.392 { 00:14:32.392 "name": "BaseBdev4", 00:14:32.392 "uuid": "1f84a435-f5d3-5c98-a6cc-0fdde1f20274", 00:14:32.392 "is_configured": true, 00:14:32.392 "data_offset": 2048, 00:14:32.392 "data_size": 63488 00:14:32.392 } 00:14:32.392 ] 00:14:32.392 }' 00:14:32.392 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.392 20:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.651 20:27:26 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:32.651 20:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.651 20:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.651 [2024-11-26 20:27:26.112506] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:32.651 [2024-11-26 20:27:26.112751] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:32.651 [2024-11-26 20:27:26.112777] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:32.651 [2024-11-26 20:27:26.112816] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:32.651 [2024-11-26 20:27:26.116704] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037090 00:14:32.651 20:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.651 20:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:32.651 [2024-11-26 20:27:26.118965] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:33.587 20:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:33.587 20:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:33.587 20:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:33.587 20:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:33.587 20:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:33.587 20:27:27 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.587 20:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.587 20:27:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.587 20:27:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.846 20:27:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.846 20:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:33.846 "name": "raid_bdev1", 00:14:33.846 "uuid": "5d636902-9723-4bf5-b33b-201f040a0037", 00:14:33.846 "strip_size_kb": 0, 00:14:33.846 "state": "online", 00:14:33.846 "raid_level": "raid1", 00:14:33.846 "superblock": true, 00:14:33.846 "num_base_bdevs": 4, 00:14:33.846 "num_base_bdevs_discovered": 3, 00:14:33.846 "num_base_bdevs_operational": 3, 00:14:33.846 "process": { 00:14:33.846 "type": "rebuild", 00:14:33.846 "target": "spare", 00:14:33.846 "progress": { 00:14:33.846 "blocks": 20480, 00:14:33.846 "percent": 32 00:14:33.846 } 00:14:33.846 }, 00:14:33.846 "base_bdevs_list": [ 00:14:33.846 { 00:14:33.846 "name": "spare", 00:14:33.846 "uuid": "ea7888ed-60d7-591b-8a65-22550f79f6de", 00:14:33.846 "is_configured": true, 00:14:33.846 "data_offset": 2048, 00:14:33.846 "data_size": 63488 00:14:33.846 }, 00:14:33.846 { 00:14:33.846 "name": null, 00:14:33.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.846 "is_configured": false, 00:14:33.846 "data_offset": 2048, 00:14:33.846 "data_size": 63488 00:14:33.846 }, 00:14:33.846 { 00:14:33.846 "name": "BaseBdev3", 00:14:33.846 "uuid": "d289045b-bff5-5db5-859c-1ea5fc1c11e4", 00:14:33.846 "is_configured": true, 00:14:33.846 "data_offset": 2048, 00:14:33.846 "data_size": 63488 00:14:33.846 }, 00:14:33.846 { 00:14:33.846 "name": "BaseBdev4", 00:14:33.846 "uuid": "1f84a435-f5d3-5c98-a6cc-0fdde1f20274", 00:14:33.846 
"is_configured": true, 00:14:33.846 "data_offset": 2048, 00:14:33.846 "data_size": 63488 00:14:33.846 } 00:14:33.846 ] 00:14:33.846 }' 00:14:33.846 20:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:33.846 20:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:33.846 20:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:33.846 20:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:33.846 20:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:33.846 20:27:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.846 20:27:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.846 [2024-11-26 20:27:27.280814] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:33.846 [2024-11-26 20:27:27.325683] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:33.846 [2024-11-26 20:27:27.325768] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:33.846 [2024-11-26 20:27:27.325785] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:33.846 [2024-11-26 20:27:27.325795] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:33.846 20:27:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.846 20:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:33.846 20:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:33.846 20:27:27 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:33.846 20:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:33.846 20:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:33.846 20:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:33.846 20:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.846 20:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.846 20:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.846 20:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.846 20:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.846 20:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.846 20:27:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.846 20:27:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.846 20:27:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.846 20:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.846 "name": "raid_bdev1", 00:14:33.846 "uuid": "5d636902-9723-4bf5-b33b-201f040a0037", 00:14:33.846 "strip_size_kb": 0, 00:14:33.846 "state": "online", 00:14:33.846 "raid_level": "raid1", 00:14:33.846 "superblock": true, 00:14:33.846 "num_base_bdevs": 4, 00:14:33.846 "num_base_bdevs_discovered": 2, 00:14:33.846 "num_base_bdevs_operational": 2, 00:14:33.846 "base_bdevs_list": [ 00:14:33.846 { 00:14:33.846 "name": null, 00:14:33.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.846 
"is_configured": false, 00:14:33.846 "data_offset": 0, 00:14:33.846 "data_size": 63488 00:14:33.846 }, 00:14:33.846 { 00:14:33.846 "name": null, 00:14:33.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.846 "is_configured": false, 00:14:33.846 "data_offset": 2048, 00:14:33.846 "data_size": 63488 00:14:33.846 }, 00:14:33.846 { 00:14:33.846 "name": "BaseBdev3", 00:14:33.846 "uuid": "d289045b-bff5-5db5-859c-1ea5fc1c11e4", 00:14:33.846 "is_configured": true, 00:14:33.846 "data_offset": 2048, 00:14:33.846 "data_size": 63488 00:14:33.846 }, 00:14:33.846 { 00:14:33.846 "name": "BaseBdev4", 00:14:33.846 "uuid": "1f84a435-f5d3-5c98-a6cc-0fdde1f20274", 00:14:33.846 "is_configured": true, 00:14:33.846 "data_offset": 2048, 00:14:33.846 "data_size": 63488 00:14:33.846 } 00:14:33.846 ] 00:14:33.846 }' 00:14:33.846 20:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.846 20:27:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.413 20:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:34.413 20:27:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.413 20:27:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.413 [2024-11-26 20:27:27.766384] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:34.413 [2024-11-26 20:27:27.766456] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.413 [2024-11-26 20:27:27.766485] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:14:34.413 [2024-11-26 20:27:27.766497] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.413 [2024-11-26 20:27:27.766980] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.413 [2024-11-26 
20:27:27.767002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:34.413 [2024-11-26 20:27:27.767096] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:34.413 [2024-11-26 20:27:27.767111] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:34.413 [2024-11-26 20:27:27.767122] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:34.413 [2024-11-26 20:27:27.767146] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:34.413 [2024-11-26 20:27:27.771019] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:14:34.413 spare 00:14:34.413 [2024-11-26 20:27:27.773151] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:34.413 20:27:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.413 20:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:35.350 20:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:35.350 20:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:35.350 20:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:35.350 20:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:35.350 20:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:35.350 20:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.350 20:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.350 
20:27:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.350 20:27:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.350 20:27:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.350 20:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:35.350 "name": "raid_bdev1", 00:14:35.350 "uuid": "5d636902-9723-4bf5-b33b-201f040a0037", 00:14:35.350 "strip_size_kb": 0, 00:14:35.350 "state": "online", 00:14:35.350 "raid_level": "raid1", 00:14:35.350 "superblock": true, 00:14:35.350 "num_base_bdevs": 4, 00:14:35.350 "num_base_bdevs_discovered": 3, 00:14:35.350 "num_base_bdevs_operational": 3, 00:14:35.350 "process": { 00:14:35.350 "type": "rebuild", 00:14:35.350 "target": "spare", 00:14:35.350 "progress": { 00:14:35.350 "blocks": 20480, 00:14:35.350 "percent": 32 00:14:35.350 } 00:14:35.350 }, 00:14:35.350 "base_bdevs_list": [ 00:14:35.350 { 00:14:35.350 "name": "spare", 00:14:35.350 "uuid": "ea7888ed-60d7-591b-8a65-22550f79f6de", 00:14:35.350 "is_configured": true, 00:14:35.350 "data_offset": 2048, 00:14:35.350 "data_size": 63488 00:14:35.350 }, 00:14:35.350 { 00:14:35.350 "name": null, 00:14:35.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.350 "is_configured": false, 00:14:35.350 "data_offset": 2048, 00:14:35.350 "data_size": 63488 00:14:35.350 }, 00:14:35.350 { 00:14:35.350 "name": "BaseBdev3", 00:14:35.350 "uuid": "d289045b-bff5-5db5-859c-1ea5fc1c11e4", 00:14:35.350 "is_configured": true, 00:14:35.350 "data_offset": 2048, 00:14:35.350 "data_size": 63488 00:14:35.350 }, 00:14:35.350 { 00:14:35.350 "name": "BaseBdev4", 00:14:35.350 "uuid": "1f84a435-f5d3-5c98-a6cc-0fdde1f20274", 00:14:35.350 "is_configured": true, 00:14:35.350 "data_offset": 2048, 00:14:35.350 "data_size": 63488 00:14:35.350 } 00:14:35.350 ] 00:14:35.350 }' 00:14:35.350 20:27:28 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:35.350 20:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:35.350 20:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:35.610 20:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:35.610 20:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:35.610 20:27:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.610 20:27:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.610 [2024-11-26 20:27:28.934205] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:35.610 [2024-11-26 20:27:28.979772] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:35.610 [2024-11-26 20:27:28.979871] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:35.610 [2024-11-26 20:27:28.979894] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:35.610 [2024-11-26 20:27:28.979914] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:35.610 20:27:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.610 20:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:35.610 20:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:35.610 20:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:35.610 20:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:35.610 20:27:28 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:35.610 20:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:35.610 20:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.610 20:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.610 20:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.610 20:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.610 20:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.610 20:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.610 20:27:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.610 20:27:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.610 20:27:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.610 20:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.610 "name": "raid_bdev1", 00:14:35.610 "uuid": "5d636902-9723-4bf5-b33b-201f040a0037", 00:14:35.610 "strip_size_kb": 0, 00:14:35.610 "state": "online", 00:14:35.610 "raid_level": "raid1", 00:14:35.610 "superblock": true, 00:14:35.610 "num_base_bdevs": 4, 00:14:35.610 "num_base_bdevs_discovered": 2, 00:14:35.610 "num_base_bdevs_operational": 2, 00:14:35.610 "base_bdevs_list": [ 00:14:35.610 { 00:14:35.610 "name": null, 00:14:35.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.610 "is_configured": false, 00:14:35.610 "data_offset": 0, 00:14:35.610 "data_size": 63488 00:14:35.610 }, 00:14:35.610 { 00:14:35.610 "name": null, 00:14:35.610 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:35.610 "is_configured": false, 00:14:35.610 "data_offset": 2048, 00:14:35.610 "data_size": 63488 00:14:35.610 }, 00:14:35.610 { 00:14:35.610 "name": "BaseBdev3", 00:14:35.610 "uuid": "d289045b-bff5-5db5-859c-1ea5fc1c11e4", 00:14:35.610 "is_configured": true, 00:14:35.610 "data_offset": 2048, 00:14:35.610 "data_size": 63488 00:14:35.610 }, 00:14:35.610 { 00:14:35.610 "name": "BaseBdev4", 00:14:35.610 "uuid": "1f84a435-f5d3-5c98-a6cc-0fdde1f20274", 00:14:35.610 "is_configured": true, 00:14:35.610 "data_offset": 2048, 00:14:35.610 "data_size": 63488 00:14:35.610 } 00:14:35.610 ] 00:14:35.610 }' 00:14:35.610 20:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.610 20:27:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.180 20:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:36.180 20:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:36.180 20:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:36.180 20:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:36.180 20:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:36.180 20:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.180 20:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.180 20:27:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.180 20:27:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.180 20:27:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.180 
20:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:36.180 "name": "raid_bdev1", 00:14:36.180 "uuid": "5d636902-9723-4bf5-b33b-201f040a0037", 00:14:36.180 "strip_size_kb": 0, 00:14:36.180 "state": "online", 00:14:36.180 "raid_level": "raid1", 00:14:36.180 "superblock": true, 00:14:36.180 "num_base_bdevs": 4, 00:14:36.180 "num_base_bdevs_discovered": 2, 00:14:36.180 "num_base_bdevs_operational": 2, 00:14:36.180 "base_bdevs_list": [ 00:14:36.180 { 00:14:36.180 "name": null, 00:14:36.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.180 "is_configured": false, 00:14:36.180 "data_offset": 0, 00:14:36.180 "data_size": 63488 00:14:36.180 }, 00:14:36.180 { 00:14:36.180 "name": null, 00:14:36.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.180 "is_configured": false, 00:14:36.180 "data_offset": 2048, 00:14:36.180 "data_size": 63488 00:14:36.180 }, 00:14:36.180 { 00:14:36.180 "name": "BaseBdev3", 00:14:36.180 "uuid": "d289045b-bff5-5db5-859c-1ea5fc1c11e4", 00:14:36.180 "is_configured": true, 00:14:36.180 "data_offset": 2048, 00:14:36.180 "data_size": 63488 00:14:36.180 }, 00:14:36.180 { 00:14:36.180 "name": "BaseBdev4", 00:14:36.180 "uuid": "1f84a435-f5d3-5c98-a6cc-0fdde1f20274", 00:14:36.180 "is_configured": true, 00:14:36.180 "data_offset": 2048, 00:14:36.180 "data_size": 63488 00:14:36.180 } 00:14:36.180 ] 00:14:36.180 }' 00:14:36.180 20:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:36.180 20:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:36.180 20:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:36.180 20:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:36.180 20:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 
00:14:36.180 20:27:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.180 20:27:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.180 20:27:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.180 20:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:36.180 20:27:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.180 20:27:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.180 [2024-11-26 20:27:29.588718] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:36.180 [2024-11-26 20:27:29.588788] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.180 [2024-11-26 20:27:29.588814] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:14:36.180 [2024-11-26 20:27:29.588825] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.180 [2024-11-26 20:27:29.589335] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.180 [2024-11-26 20:27:29.589357] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:36.180 [2024-11-26 20:27:29.589449] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:36.180 [2024-11-26 20:27:29.589476] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:36.180 [2024-11-26 20:27:29.589488] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:36.180 [2024-11-26 20:27:29.589502] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: 
Invalid argument 00:14:36.180 BaseBdev1 00:14:36.180 20:27:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.180 20:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:37.120 20:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:37.120 20:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:37.120 20:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:37.120 20:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:37.120 20:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:37.120 20:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:37.120 20:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.120 20:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.120 20:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.120 20:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.120 20:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.120 20:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.120 20:27:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.120 20:27:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.120 20:27:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.120 20:27:30 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.120 "name": "raid_bdev1", 00:14:37.120 "uuid": "5d636902-9723-4bf5-b33b-201f040a0037", 00:14:37.120 "strip_size_kb": 0, 00:14:37.120 "state": "online", 00:14:37.120 "raid_level": "raid1", 00:14:37.120 "superblock": true, 00:14:37.120 "num_base_bdevs": 4, 00:14:37.120 "num_base_bdevs_discovered": 2, 00:14:37.120 "num_base_bdevs_operational": 2, 00:14:37.120 "base_bdevs_list": [ 00:14:37.120 { 00:14:37.120 "name": null, 00:14:37.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.120 "is_configured": false, 00:14:37.120 "data_offset": 0, 00:14:37.120 "data_size": 63488 00:14:37.120 }, 00:14:37.120 { 00:14:37.120 "name": null, 00:14:37.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.120 "is_configured": false, 00:14:37.120 "data_offset": 2048, 00:14:37.120 "data_size": 63488 00:14:37.120 }, 00:14:37.120 { 00:14:37.120 "name": "BaseBdev3", 00:14:37.120 "uuid": "d289045b-bff5-5db5-859c-1ea5fc1c11e4", 00:14:37.120 "is_configured": true, 00:14:37.120 "data_offset": 2048, 00:14:37.120 "data_size": 63488 00:14:37.120 }, 00:14:37.120 { 00:14:37.120 "name": "BaseBdev4", 00:14:37.120 "uuid": "1f84a435-f5d3-5c98-a6cc-0fdde1f20274", 00:14:37.120 "is_configured": true, 00:14:37.120 "data_offset": 2048, 00:14:37.120 "data_size": 63488 00:14:37.120 } 00:14:37.120 ] 00:14:37.120 }' 00:14:37.120 20:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.120 20:27:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.692 20:27:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:37.692 20:27:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.692 20:27:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:37.692 20:27:31 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@171 -- # local target=none 00:14:37.692 20:27:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.692 20:27:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.692 20:27:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.692 20:27:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.692 20:27:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.692 20:27:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.692 20:27:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.692 "name": "raid_bdev1", 00:14:37.692 "uuid": "5d636902-9723-4bf5-b33b-201f040a0037", 00:14:37.692 "strip_size_kb": 0, 00:14:37.692 "state": "online", 00:14:37.692 "raid_level": "raid1", 00:14:37.692 "superblock": true, 00:14:37.692 "num_base_bdevs": 4, 00:14:37.692 "num_base_bdevs_discovered": 2, 00:14:37.692 "num_base_bdevs_operational": 2, 00:14:37.692 "base_bdevs_list": [ 00:14:37.692 { 00:14:37.692 "name": null, 00:14:37.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.692 "is_configured": false, 00:14:37.692 "data_offset": 0, 00:14:37.692 "data_size": 63488 00:14:37.692 }, 00:14:37.692 { 00:14:37.692 "name": null, 00:14:37.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.692 "is_configured": false, 00:14:37.692 "data_offset": 2048, 00:14:37.692 "data_size": 63488 00:14:37.692 }, 00:14:37.692 { 00:14:37.692 "name": "BaseBdev3", 00:14:37.692 "uuid": "d289045b-bff5-5db5-859c-1ea5fc1c11e4", 00:14:37.692 "is_configured": true, 00:14:37.692 "data_offset": 2048, 00:14:37.692 "data_size": 63488 00:14:37.692 }, 00:14:37.692 { 00:14:37.692 "name": "BaseBdev4", 00:14:37.692 "uuid": "1f84a435-f5d3-5c98-a6cc-0fdde1f20274", 
00:14:37.692 "is_configured": true, 00:14:37.692 "data_offset": 2048, 00:14:37.693 "data_size": 63488 00:14:37.693 } 00:14:37.693 ] 00:14:37.693 }' 00:14:37.693 20:27:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:37.693 20:27:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:37.693 20:27:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:37.693 20:27:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:37.693 20:27:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:37.693 20:27:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:14:37.693 20:27:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:37.693 20:27:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:37.693 20:27:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:37.693 20:27:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:37.693 20:27:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:37.693 20:27:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:37.693 20:27:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.693 20:27:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.693 [2024-11-26 20:27:31.178258] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:37.693 [2024-11-26 
20:27:31.178434] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:37.693 [2024-11-26 20:27:31.178462] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:37.693 request: 00:14:37.693 { 00:14:37.693 "base_bdev": "BaseBdev1", 00:14:37.693 "raid_bdev": "raid_bdev1", 00:14:37.693 "method": "bdev_raid_add_base_bdev", 00:14:37.693 "req_id": 1 00:14:37.693 } 00:14:37.693 Got JSON-RPC error response 00:14:37.693 response: 00:14:37.693 { 00:14:37.693 "code": -22, 00:14:37.693 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:37.693 } 00:14:37.693 20:27:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:37.693 20:27:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:14:37.693 20:27:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:37.693 20:27:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:37.693 20:27:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:37.693 20:27:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:39.073 20:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:39.073 20:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:39.073 20:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:39.073 20:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:39.073 20:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:39.073 20:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:14:39.073 20:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.073 20:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.073 20:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.073 20:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.073 20:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.073 20:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.073 20:27:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.073 20:27:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.073 20:27:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.073 20:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.073 "name": "raid_bdev1", 00:14:39.073 "uuid": "5d636902-9723-4bf5-b33b-201f040a0037", 00:14:39.073 "strip_size_kb": 0, 00:14:39.073 "state": "online", 00:14:39.073 "raid_level": "raid1", 00:14:39.073 "superblock": true, 00:14:39.073 "num_base_bdevs": 4, 00:14:39.073 "num_base_bdevs_discovered": 2, 00:14:39.073 "num_base_bdevs_operational": 2, 00:14:39.073 "base_bdevs_list": [ 00:14:39.073 { 00:14:39.073 "name": null, 00:14:39.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.073 "is_configured": false, 00:14:39.073 "data_offset": 0, 00:14:39.073 "data_size": 63488 00:14:39.073 }, 00:14:39.073 { 00:14:39.073 "name": null, 00:14:39.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.073 "is_configured": false, 00:14:39.073 "data_offset": 2048, 00:14:39.073 "data_size": 63488 00:14:39.073 }, 00:14:39.073 { 00:14:39.073 "name": 
"BaseBdev3", 00:14:39.073 "uuid": "d289045b-bff5-5db5-859c-1ea5fc1c11e4", 00:14:39.073 "is_configured": true, 00:14:39.073 "data_offset": 2048, 00:14:39.073 "data_size": 63488 00:14:39.073 }, 00:14:39.073 { 00:14:39.073 "name": "BaseBdev4", 00:14:39.073 "uuid": "1f84a435-f5d3-5c98-a6cc-0fdde1f20274", 00:14:39.073 "is_configured": true, 00:14:39.073 "data_offset": 2048, 00:14:39.073 "data_size": 63488 00:14:39.073 } 00:14:39.073 ] 00:14:39.073 }' 00:14:39.073 20:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.073 20:27:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.333 20:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:39.333 20:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:39.333 20:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:39.333 20:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:39.333 20:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:39.333 20:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.333 20:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.333 20:27:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.333 20:27:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.333 20:27:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.333 20:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:39.333 "name": "raid_bdev1", 00:14:39.333 "uuid": "5d636902-9723-4bf5-b33b-201f040a0037", 00:14:39.333 
"strip_size_kb": 0, 00:14:39.333 "state": "online", 00:14:39.333 "raid_level": "raid1", 00:14:39.333 "superblock": true, 00:14:39.333 "num_base_bdevs": 4, 00:14:39.333 "num_base_bdevs_discovered": 2, 00:14:39.333 "num_base_bdevs_operational": 2, 00:14:39.333 "base_bdevs_list": [ 00:14:39.333 { 00:14:39.333 "name": null, 00:14:39.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.333 "is_configured": false, 00:14:39.333 "data_offset": 0, 00:14:39.333 "data_size": 63488 00:14:39.333 }, 00:14:39.333 { 00:14:39.333 "name": null, 00:14:39.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.333 "is_configured": false, 00:14:39.333 "data_offset": 2048, 00:14:39.333 "data_size": 63488 00:14:39.333 }, 00:14:39.333 { 00:14:39.333 "name": "BaseBdev3", 00:14:39.333 "uuid": "d289045b-bff5-5db5-859c-1ea5fc1c11e4", 00:14:39.333 "is_configured": true, 00:14:39.333 "data_offset": 2048, 00:14:39.333 "data_size": 63488 00:14:39.333 }, 00:14:39.333 { 00:14:39.333 "name": "BaseBdev4", 00:14:39.333 "uuid": "1f84a435-f5d3-5c98-a6cc-0fdde1f20274", 00:14:39.333 "is_configured": true, 00:14:39.333 "data_offset": 2048, 00:14:39.333 "data_size": 63488 00:14:39.333 } 00:14:39.333 ] 00:14:39.333 }' 00:14:39.333 20:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:39.333 20:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:39.333 20:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:39.333 20:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:39.333 20:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 90314 00:14:39.333 20:27:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 90314 ']' 00:14:39.333 20:27:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 90314 00:14:39.333 
20:27:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:14:39.333 20:27:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:39.333 20:27:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90314 00:14:39.333 20:27:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:39.333 20:27:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:39.333 killing process with pid 90314 00:14:39.333 20:27:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90314' 00:14:39.333 20:27:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 90314 00:14:39.333 Received shutdown signal, test time was about 17.920345 seconds 00:14:39.333 00:14:39.333 Latency(us) 00:14:39.333 [2024-11-26T20:27:32.885Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:39.333 [2024-11-26T20:27:32.885Z] =================================================================================================================== 00:14:39.333 [2024-11-26T20:27:32.885Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:39.333 [2024-11-26 20:27:32.840465] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:39.333 [2024-11-26 20:27:32.840687] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:39.333 20:27:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 90314 00:14:39.333 [2024-11-26 20:27:32.840779] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:39.333 [2024-11-26 20:27:32.840804] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:14:39.592 [2024-11-26 20:27:32.924351] 
bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:39.851 20:27:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:39.851 00:14:39.851 real 0m20.220s 00:14:39.851 user 0m27.082s 00:14:39.851 sys 0m2.633s 00:14:39.851 20:27:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:39.851 20:27:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.851 ************************************ 00:14:39.851 END TEST raid_rebuild_test_sb_io 00:14:39.851 ************************************ 00:14:39.851 20:27:33 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:14:39.851 20:27:33 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:14:39.851 20:27:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:39.851 20:27:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:39.851 20:27:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:39.851 ************************************ 00:14:39.851 START TEST raid5f_state_function_test 00:14:39.851 ************************************ 00:14:39.851 20:27:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 false 00:14:39.851 20:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:39.851 20:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:39.851 20:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:39.851 20:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:39.851 20:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:39.851 20:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= 
num_base_bdevs )) 00:14:39.851 20:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:39.851 20:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:39.851 20:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:39.852 20:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:39.852 20:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:39.852 20:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:39.852 20:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:39.852 20:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:39.852 20:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:39.852 20:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:39.852 20:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:39.852 20:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:39.852 20:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:39.852 20:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:39.852 20:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:39.852 20:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:39.852 20:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:39.852 20:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:14:39.852 20:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:39.852 20:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:39.852 20:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=91029 00:14:39.852 20:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:39.852 Process raid pid: 91029 00:14:39.852 20:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 91029' 00:14:39.852 20:27:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 91029 00:14:39.852 20:27:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 91029 ']' 00:14:39.852 20:27:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:39.852 20:27:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:39.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:39.852 20:27:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:39.852 20:27:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:39.852 20:27:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.111 [2024-11-26 20:27:33.456031] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:14:40.111 [2024-11-26 20:27:33.456183] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:40.111 [2024-11-26 20:27:33.608078] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.370 [2024-11-26 20:27:33.717432] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.370 [2024-11-26 20:27:33.790160] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:40.370 [2024-11-26 20:27:33.790211] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:40.940 20:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:40.940 20:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:14:40.940 20:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:40.940 20:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.940 20:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.941 [2024-11-26 20:27:34.439345] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:40.941 [2024-11-26 20:27:34.439402] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:40.941 [2024-11-26 20:27:34.439419] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:40.941 [2024-11-26 20:27:34.439429] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:40.941 [2024-11-26 20:27:34.439436] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:14:40.941 [2024-11-26 20:27:34.439447] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:40.941 20:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.941 20:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:40.941 20:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:40.941 20:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:40.941 20:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:40.941 20:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:40.941 20:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:40.941 20:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.941 20:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.941 20:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.941 20:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.941 20:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.941 20:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.941 20:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.941 20:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:40.941 20:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:14:41.201 20:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.201 "name": "Existed_Raid", 00:14:41.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.201 "strip_size_kb": 64, 00:14:41.201 "state": "configuring", 00:14:41.201 "raid_level": "raid5f", 00:14:41.201 "superblock": false, 00:14:41.201 "num_base_bdevs": 3, 00:14:41.201 "num_base_bdevs_discovered": 0, 00:14:41.201 "num_base_bdevs_operational": 3, 00:14:41.201 "base_bdevs_list": [ 00:14:41.201 { 00:14:41.201 "name": "BaseBdev1", 00:14:41.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.201 "is_configured": false, 00:14:41.201 "data_offset": 0, 00:14:41.201 "data_size": 0 00:14:41.201 }, 00:14:41.201 { 00:14:41.201 "name": "BaseBdev2", 00:14:41.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.201 "is_configured": false, 00:14:41.201 "data_offset": 0, 00:14:41.201 "data_size": 0 00:14:41.201 }, 00:14:41.201 { 00:14:41.201 "name": "BaseBdev3", 00:14:41.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.201 "is_configured": false, 00:14:41.201 "data_offset": 0, 00:14:41.201 "data_size": 0 00:14:41.201 } 00:14:41.201 ] 00:14:41.201 }' 00:14:41.201 20:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.201 20:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.462 20:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:41.462 20:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.462 20:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.462 [2024-11-26 20:27:34.882489] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:41.462 [2024-11-26 20:27:34.882538] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000006280 name Existed_Raid, state configuring 00:14:41.462 20:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.462 20:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:41.462 20:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.462 20:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.462 [2024-11-26 20:27:34.894513] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:41.462 [2024-11-26 20:27:34.894557] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:41.462 [2024-11-26 20:27:34.894566] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:41.462 [2024-11-26 20:27:34.894576] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:41.462 [2024-11-26 20:27:34.894582] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:41.462 [2024-11-26 20:27:34.894591] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:41.462 20:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.462 20:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:41.462 20:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.462 20:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.462 [2024-11-26 20:27:34.915997] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:41.462 BaseBdev1 00:14:41.462 20:27:34 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.462 20:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:41.462 20:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:41.462 20:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:41.462 20:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:41.462 20:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:41.462 20:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:41.462 20:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:41.462 20:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.462 20:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.462 20:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.462 20:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:41.462 20:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.462 20:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.462 [ 00:14:41.462 { 00:14:41.462 "name": "BaseBdev1", 00:14:41.462 "aliases": [ 00:14:41.462 "45315b6a-93e4-455f-b6ba-619231714acb" 00:14:41.462 ], 00:14:41.462 "product_name": "Malloc disk", 00:14:41.462 "block_size": 512, 00:14:41.462 "num_blocks": 65536, 00:14:41.462 "uuid": "45315b6a-93e4-455f-b6ba-619231714acb", 00:14:41.462 "assigned_rate_limits": { 00:14:41.462 "rw_ios_per_sec": 0, 00:14:41.462 
"rw_mbytes_per_sec": 0, 00:14:41.462 "r_mbytes_per_sec": 0, 00:14:41.462 "w_mbytes_per_sec": 0 00:14:41.462 }, 00:14:41.462 "claimed": true, 00:14:41.462 "claim_type": "exclusive_write", 00:14:41.462 "zoned": false, 00:14:41.462 "supported_io_types": { 00:14:41.462 "read": true, 00:14:41.462 "write": true, 00:14:41.462 "unmap": true, 00:14:41.462 "flush": true, 00:14:41.462 "reset": true, 00:14:41.462 "nvme_admin": false, 00:14:41.462 "nvme_io": false, 00:14:41.462 "nvme_io_md": false, 00:14:41.462 "write_zeroes": true, 00:14:41.462 "zcopy": true, 00:14:41.462 "get_zone_info": false, 00:14:41.462 "zone_management": false, 00:14:41.462 "zone_append": false, 00:14:41.462 "compare": false, 00:14:41.462 "compare_and_write": false, 00:14:41.462 "abort": true, 00:14:41.462 "seek_hole": false, 00:14:41.462 "seek_data": false, 00:14:41.462 "copy": true, 00:14:41.462 "nvme_iov_md": false 00:14:41.462 }, 00:14:41.462 "memory_domains": [ 00:14:41.462 { 00:14:41.462 "dma_device_id": "system", 00:14:41.462 "dma_device_type": 1 00:14:41.462 }, 00:14:41.462 { 00:14:41.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:41.462 "dma_device_type": 2 00:14:41.462 } 00:14:41.462 ], 00:14:41.462 "driver_specific": {} 00:14:41.462 } 00:14:41.462 ] 00:14:41.462 20:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.462 20:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:41.462 20:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:41.462 20:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:41.462 20:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:41.462 20:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:41.462 20:27:34 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:41.462 20:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:41.462 20:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.462 20:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.462 20:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.462 20:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.462 20:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.463 20:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.463 20:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.463 20:27:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.463 20:27:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.463 20:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.463 "name": "Existed_Raid", 00:14:41.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.463 "strip_size_kb": 64, 00:14:41.463 "state": "configuring", 00:14:41.463 "raid_level": "raid5f", 00:14:41.463 "superblock": false, 00:14:41.463 "num_base_bdevs": 3, 00:14:41.463 "num_base_bdevs_discovered": 1, 00:14:41.463 "num_base_bdevs_operational": 3, 00:14:41.463 "base_bdevs_list": [ 00:14:41.463 { 00:14:41.463 "name": "BaseBdev1", 00:14:41.463 "uuid": "45315b6a-93e4-455f-b6ba-619231714acb", 00:14:41.463 "is_configured": true, 00:14:41.463 "data_offset": 0, 00:14:41.463 "data_size": 65536 00:14:41.463 }, 00:14:41.463 { 00:14:41.463 "name": 
"BaseBdev2", 00:14:41.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.463 "is_configured": false, 00:14:41.463 "data_offset": 0, 00:14:41.463 "data_size": 0 00:14:41.463 }, 00:14:41.463 { 00:14:41.463 "name": "BaseBdev3", 00:14:41.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.463 "is_configured": false, 00:14:41.463 "data_offset": 0, 00:14:41.463 "data_size": 0 00:14:41.463 } 00:14:41.463 ] 00:14:41.463 }' 00:14:41.463 20:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.463 20:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.031 20:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:42.031 20:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.031 20:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.031 [2024-11-26 20:27:35.435215] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:42.031 [2024-11-26 20:27:35.435276] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:14:42.031 20:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.031 20:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:42.031 20:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.031 20:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.031 [2024-11-26 20:27:35.443231] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:42.031 [2024-11-26 20:27:35.445135] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:14:42.031 [2024-11-26 20:27:35.445175] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:42.031 [2024-11-26 20:27:35.445185] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:42.031 [2024-11-26 20:27:35.445195] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:42.031 20:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.031 20:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:42.031 20:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:42.031 20:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:42.031 20:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:42.031 20:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:42.031 20:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:42.031 20:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:42.031 20:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:42.031 20:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.031 20:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.031 20:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.031 20:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.031 20:27:35 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.031 20:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.031 20:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.031 20:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.031 20:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.031 20:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.031 "name": "Existed_Raid", 00:14:42.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.031 "strip_size_kb": 64, 00:14:42.031 "state": "configuring", 00:14:42.031 "raid_level": "raid5f", 00:14:42.031 "superblock": false, 00:14:42.031 "num_base_bdevs": 3, 00:14:42.031 "num_base_bdevs_discovered": 1, 00:14:42.031 "num_base_bdevs_operational": 3, 00:14:42.031 "base_bdevs_list": [ 00:14:42.031 { 00:14:42.031 "name": "BaseBdev1", 00:14:42.031 "uuid": "45315b6a-93e4-455f-b6ba-619231714acb", 00:14:42.031 "is_configured": true, 00:14:42.031 "data_offset": 0, 00:14:42.031 "data_size": 65536 00:14:42.031 }, 00:14:42.031 { 00:14:42.031 "name": "BaseBdev2", 00:14:42.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.031 "is_configured": false, 00:14:42.031 "data_offset": 0, 00:14:42.031 "data_size": 0 00:14:42.031 }, 00:14:42.031 { 00:14:42.031 "name": "BaseBdev3", 00:14:42.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.031 "is_configured": false, 00:14:42.031 "data_offset": 0, 00:14:42.031 "data_size": 0 00:14:42.031 } 00:14:42.031 ] 00:14:42.031 }' 00:14:42.031 20:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.031 20:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.382 20:27:35 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:42.382 20:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.382 20:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.672 [2024-11-26 20:27:35.905169] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:42.672 BaseBdev2 00:14:42.672 20:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.672 20:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:42.672 20:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:42.672 20:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:42.672 20:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:42.673 20:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:42.673 20:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:42.673 20:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:42.673 20:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.673 20:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.673 20:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.673 20:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:42.673 20:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.673 20:27:35 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:42.673 [ 00:14:42.673 { 00:14:42.673 "name": "BaseBdev2", 00:14:42.673 "aliases": [ 00:14:42.673 "0921832e-29e1-4a7c-b7b6-701f7b3c9808" 00:14:42.673 ], 00:14:42.673 "product_name": "Malloc disk", 00:14:42.673 "block_size": 512, 00:14:42.673 "num_blocks": 65536, 00:14:42.673 "uuid": "0921832e-29e1-4a7c-b7b6-701f7b3c9808", 00:14:42.673 "assigned_rate_limits": { 00:14:42.673 "rw_ios_per_sec": 0, 00:14:42.673 "rw_mbytes_per_sec": 0, 00:14:42.673 "r_mbytes_per_sec": 0, 00:14:42.673 "w_mbytes_per_sec": 0 00:14:42.673 }, 00:14:42.673 "claimed": true, 00:14:42.673 "claim_type": "exclusive_write", 00:14:42.673 "zoned": false, 00:14:42.673 "supported_io_types": { 00:14:42.673 "read": true, 00:14:42.673 "write": true, 00:14:42.673 "unmap": true, 00:14:42.673 "flush": true, 00:14:42.673 "reset": true, 00:14:42.673 "nvme_admin": false, 00:14:42.673 "nvme_io": false, 00:14:42.673 "nvme_io_md": false, 00:14:42.673 "write_zeroes": true, 00:14:42.673 "zcopy": true, 00:14:42.673 "get_zone_info": false, 00:14:42.673 "zone_management": false, 00:14:42.673 "zone_append": false, 00:14:42.673 "compare": false, 00:14:42.673 "compare_and_write": false, 00:14:42.673 "abort": true, 00:14:42.673 "seek_hole": false, 00:14:42.673 "seek_data": false, 00:14:42.673 "copy": true, 00:14:42.673 "nvme_iov_md": false 00:14:42.673 }, 00:14:42.673 "memory_domains": [ 00:14:42.673 { 00:14:42.673 "dma_device_id": "system", 00:14:42.673 "dma_device_type": 1 00:14:42.673 }, 00:14:42.673 { 00:14:42.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:42.673 "dma_device_type": 2 00:14:42.673 } 00:14:42.673 ], 00:14:42.673 "driver_specific": {} 00:14:42.673 } 00:14:42.673 ] 00:14:42.673 20:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.673 20:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:42.673 20:27:35 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:42.673 20:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:42.673 20:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:42.673 20:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:42.673 20:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:42.673 20:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:42.673 20:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:42.673 20:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:42.673 20:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.673 20:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.673 20:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.673 20:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.673 20:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.673 20:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.673 20:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.673 20:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.673 20:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.673 20:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:14:42.673 "name": "Existed_Raid", 00:14:42.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.673 "strip_size_kb": 64, 00:14:42.673 "state": "configuring", 00:14:42.673 "raid_level": "raid5f", 00:14:42.673 "superblock": false, 00:14:42.673 "num_base_bdevs": 3, 00:14:42.673 "num_base_bdevs_discovered": 2, 00:14:42.673 "num_base_bdevs_operational": 3, 00:14:42.673 "base_bdevs_list": [ 00:14:42.673 { 00:14:42.673 "name": "BaseBdev1", 00:14:42.673 "uuid": "45315b6a-93e4-455f-b6ba-619231714acb", 00:14:42.673 "is_configured": true, 00:14:42.673 "data_offset": 0, 00:14:42.673 "data_size": 65536 00:14:42.673 }, 00:14:42.673 { 00:14:42.673 "name": "BaseBdev2", 00:14:42.673 "uuid": "0921832e-29e1-4a7c-b7b6-701f7b3c9808", 00:14:42.673 "is_configured": true, 00:14:42.673 "data_offset": 0, 00:14:42.673 "data_size": 65536 00:14:42.673 }, 00:14:42.673 { 00:14:42.673 "name": "BaseBdev3", 00:14:42.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.673 "is_configured": false, 00:14:42.673 "data_offset": 0, 00:14:42.673 "data_size": 0 00:14:42.673 } 00:14:42.673 ] 00:14:42.673 }' 00:14:42.673 20:27:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.673 20:27:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.933 20:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:42.933 20:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.933 20:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.934 [2024-11-26 20:27:36.416804] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:42.934 [2024-11-26 20:27:36.416902] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:14:42.934 [2024-11-26 20:27:36.416923] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:42.934 [2024-11-26 20:27:36.417225] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:14:42.934 [2024-11-26 20:27:36.417789] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:14:42.934 [2024-11-26 20:27:36.417813] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:14:42.934 [2024-11-26 20:27:36.418049] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:42.934 BaseBdev3 00:14:42.934 20:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.934 20:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:42.934 20:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:42.934 20:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:42.934 20:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:42.934 20:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:42.934 20:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:42.934 20:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:42.934 20:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.934 20:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.934 20:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.934 20:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:14:42.934 20:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.934 20:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.934 [ 00:14:42.934 { 00:14:42.934 "name": "BaseBdev3", 00:14:42.934 "aliases": [ 00:14:42.934 "3231367c-87aa-459f-9f50-c86711986911" 00:14:42.934 ], 00:14:42.934 "product_name": "Malloc disk", 00:14:42.934 "block_size": 512, 00:14:42.934 "num_blocks": 65536, 00:14:42.934 "uuid": "3231367c-87aa-459f-9f50-c86711986911", 00:14:42.934 "assigned_rate_limits": { 00:14:42.934 "rw_ios_per_sec": 0, 00:14:42.934 "rw_mbytes_per_sec": 0, 00:14:42.934 "r_mbytes_per_sec": 0, 00:14:42.934 "w_mbytes_per_sec": 0 00:14:42.934 }, 00:14:42.934 "claimed": true, 00:14:42.934 "claim_type": "exclusive_write", 00:14:42.934 "zoned": false, 00:14:42.934 "supported_io_types": { 00:14:42.934 "read": true, 00:14:42.934 "write": true, 00:14:42.934 "unmap": true, 00:14:42.934 "flush": true, 00:14:42.934 "reset": true, 00:14:42.934 "nvme_admin": false, 00:14:42.934 "nvme_io": false, 00:14:42.934 "nvme_io_md": false, 00:14:42.934 "write_zeroes": true, 00:14:42.934 "zcopy": true, 00:14:42.934 "get_zone_info": false, 00:14:42.934 "zone_management": false, 00:14:42.934 "zone_append": false, 00:14:42.934 "compare": false, 00:14:42.934 "compare_and_write": false, 00:14:42.934 "abort": true, 00:14:42.934 "seek_hole": false, 00:14:42.934 "seek_data": false, 00:14:42.934 "copy": true, 00:14:42.934 "nvme_iov_md": false 00:14:42.934 }, 00:14:42.934 "memory_domains": [ 00:14:42.934 { 00:14:42.934 "dma_device_id": "system", 00:14:42.934 "dma_device_type": 1 00:14:42.934 }, 00:14:42.934 { 00:14:42.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:42.934 "dma_device_type": 2 00:14:42.934 } 00:14:42.934 ], 00:14:42.934 "driver_specific": {} 00:14:42.934 } 00:14:42.934 ] 00:14:42.934 20:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:14:42.934 20:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:42.934 20:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:42.934 20:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:42.934 20:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:42.934 20:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:42.934 20:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:42.934 20:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:42.934 20:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:42.934 20:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:42.934 20:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.934 20:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.934 20:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.934 20:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.934 20:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.934 20:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.934 20:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.934 20:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.934 20:27:36 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.193 20:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.193 "name": "Existed_Raid", 00:14:43.193 "uuid": "d38b6e86-ff52-4774-b18c-f08ffe19bad3", 00:14:43.193 "strip_size_kb": 64, 00:14:43.193 "state": "online", 00:14:43.193 "raid_level": "raid5f", 00:14:43.193 "superblock": false, 00:14:43.193 "num_base_bdevs": 3, 00:14:43.193 "num_base_bdevs_discovered": 3, 00:14:43.193 "num_base_bdevs_operational": 3, 00:14:43.193 "base_bdevs_list": [ 00:14:43.193 { 00:14:43.193 "name": "BaseBdev1", 00:14:43.193 "uuid": "45315b6a-93e4-455f-b6ba-619231714acb", 00:14:43.193 "is_configured": true, 00:14:43.193 "data_offset": 0, 00:14:43.193 "data_size": 65536 00:14:43.193 }, 00:14:43.193 { 00:14:43.193 "name": "BaseBdev2", 00:14:43.193 "uuid": "0921832e-29e1-4a7c-b7b6-701f7b3c9808", 00:14:43.193 "is_configured": true, 00:14:43.193 "data_offset": 0, 00:14:43.193 "data_size": 65536 00:14:43.193 }, 00:14:43.193 { 00:14:43.193 "name": "BaseBdev3", 00:14:43.193 "uuid": "3231367c-87aa-459f-9f50-c86711986911", 00:14:43.193 "is_configured": true, 00:14:43.193 "data_offset": 0, 00:14:43.193 "data_size": 65536 00:14:43.193 } 00:14:43.193 ] 00:14:43.193 }' 00:14:43.193 20:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.193 20:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.453 20:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:43.453 20:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:43.453 20:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:43.453 20:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:43.453 20:27:36 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:43.453 20:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:43.453 20:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:43.453 20:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.453 20:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:43.453 20:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.453 [2024-11-26 20:27:36.932214] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:43.453 20:27:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.453 20:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:43.453 "name": "Existed_Raid", 00:14:43.453 "aliases": [ 00:14:43.453 "d38b6e86-ff52-4774-b18c-f08ffe19bad3" 00:14:43.453 ], 00:14:43.453 "product_name": "Raid Volume", 00:14:43.453 "block_size": 512, 00:14:43.453 "num_blocks": 131072, 00:14:43.453 "uuid": "d38b6e86-ff52-4774-b18c-f08ffe19bad3", 00:14:43.453 "assigned_rate_limits": { 00:14:43.453 "rw_ios_per_sec": 0, 00:14:43.453 "rw_mbytes_per_sec": 0, 00:14:43.453 "r_mbytes_per_sec": 0, 00:14:43.453 "w_mbytes_per_sec": 0 00:14:43.453 }, 00:14:43.453 "claimed": false, 00:14:43.453 "zoned": false, 00:14:43.453 "supported_io_types": { 00:14:43.453 "read": true, 00:14:43.453 "write": true, 00:14:43.453 "unmap": false, 00:14:43.453 "flush": false, 00:14:43.453 "reset": true, 00:14:43.453 "nvme_admin": false, 00:14:43.453 "nvme_io": false, 00:14:43.453 "nvme_io_md": false, 00:14:43.453 "write_zeroes": true, 00:14:43.453 "zcopy": false, 00:14:43.453 "get_zone_info": false, 00:14:43.453 "zone_management": false, 00:14:43.453 "zone_append": false, 
00:14:43.453 "compare": false, 00:14:43.453 "compare_and_write": false, 00:14:43.453 "abort": false, 00:14:43.453 "seek_hole": false, 00:14:43.453 "seek_data": false, 00:14:43.453 "copy": false, 00:14:43.453 "nvme_iov_md": false 00:14:43.453 }, 00:14:43.453 "driver_specific": { 00:14:43.453 "raid": { 00:14:43.453 "uuid": "d38b6e86-ff52-4774-b18c-f08ffe19bad3", 00:14:43.453 "strip_size_kb": 64, 00:14:43.453 "state": "online", 00:14:43.453 "raid_level": "raid5f", 00:14:43.453 "superblock": false, 00:14:43.453 "num_base_bdevs": 3, 00:14:43.453 "num_base_bdevs_discovered": 3, 00:14:43.453 "num_base_bdevs_operational": 3, 00:14:43.453 "base_bdevs_list": [ 00:14:43.453 { 00:14:43.453 "name": "BaseBdev1", 00:14:43.453 "uuid": "45315b6a-93e4-455f-b6ba-619231714acb", 00:14:43.453 "is_configured": true, 00:14:43.453 "data_offset": 0, 00:14:43.453 "data_size": 65536 00:14:43.453 }, 00:14:43.453 { 00:14:43.453 "name": "BaseBdev2", 00:14:43.453 "uuid": "0921832e-29e1-4a7c-b7b6-701f7b3c9808", 00:14:43.453 "is_configured": true, 00:14:43.453 "data_offset": 0, 00:14:43.453 "data_size": 65536 00:14:43.453 }, 00:14:43.453 { 00:14:43.453 "name": "BaseBdev3", 00:14:43.453 "uuid": "3231367c-87aa-459f-9f50-c86711986911", 00:14:43.453 "is_configured": true, 00:14:43.453 "data_offset": 0, 00:14:43.453 "data_size": 65536 00:14:43.453 } 00:14:43.453 ] 00:14:43.453 } 00:14:43.453 } 00:14:43.453 }' 00:14:43.453 20:27:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:43.713 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:43.713 BaseBdev2 00:14:43.713 BaseBdev3' 00:14:43.713 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:43.713 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:14:43.713 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:43.713 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:43.713 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:43.713 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.713 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.713 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.713 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:43.713 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:43.713 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:43.713 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:43.713 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.713 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:43.713 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.714 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.714 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:43.714 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:43.714 20:27:37 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:43.714 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:43.714 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:43.714 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.714 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.714 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.714 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:43.714 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:43.714 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:43.714 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.714 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.714 [2024-11-26 20:27:37.223588] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:43.714 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.714 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:43.714 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:43.714 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:43.714 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:43.714 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:43.714 
20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:14:43.714 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:43.714 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:43.714 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:43.714 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:43.714 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:43.714 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.714 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.714 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.714 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.714 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:43.714 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.714 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.714 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.714 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.973 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.973 "name": "Existed_Raid", 00:14:43.973 "uuid": "d38b6e86-ff52-4774-b18c-f08ffe19bad3", 00:14:43.973 "strip_size_kb": 64, 00:14:43.973 "state": 
"online", 00:14:43.973 "raid_level": "raid5f", 00:14:43.973 "superblock": false, 00:14:43.973 "num_base_bdevs": 3, 00:14:43.973 "num_base_bdevs_discovered": 2, 00:14:43.973 "num_base_bdevs_operational": 2, 00:14:43.973 "base_bdevs_list": [ 00:14:43.973 { 00:14:43.973 "name": null, 00:14:43.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.973 "is_configured": false, 00:14:43.973 "data_offset": 0, 00:14:43.973 "data_size": 65536 00:14:43.973 }, 00:14:43.973 { 00:14:43.973 "name": "BaseBdev2", 00:14:43.973 "uuid": "0921832e-29e1-4a7c-b7b6-701f7b3c9808", 00:14:43.973 "is_configured": true, 00:14:43.973 "data_offset": 0, 00:14:43.973 "data_size": 65536 00:14:43.973 }, 00:14:43.973 { 00:14:43.973 "name": "BaseBdev3", 00:14:43.973 "uuid": "3231367c-87aa-459f-9f50-c86711986911", 00:14:43.973 "is_configured": true, 00:14:43.973 "data_offset": 0, 00:14:43.973 "data_size": 65536 00:14:43.973 } 00:14:43.973 ] 00:14:43.973 }' 00:14:43.973 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.973 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.231 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:44.231 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:44.231 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.231 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.231 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.231 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:44.231 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.231 20:27:37 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:44.231 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:44.231 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:44.231 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.231 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.231 [2024-11-26 20:27:37.747307] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:44.231 [2024-11-26 20:27:37.747416] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:44.231 [2024-11-26 20:27:37.768151] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:44.231 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.231 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:44.231 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:44.231 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.231 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.231 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.231 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:44.490 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.490 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:44.490 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:14:44.490 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:44.490 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.490 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.490 [2024-11-26 20:27:37.828133] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:44.490 [2024-11-26 20:27:37.828200] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:14:44.490 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.490 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:44.490 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:44.490 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:44.490 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.490 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.490 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.490 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.490 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:44.490 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:44.490 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:44.491 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:44.491 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:14:44.491 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:44.491 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.491 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.491 BaseBdev2 00:14:44.491 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.491 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:44.491 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:44.491 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:44.491 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:44.491 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:44.491 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:44.491 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:44.491 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.491 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.491 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.491 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:44.491 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.491 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:14:44.491 [ 00:14:44.491 { 00:14:44.491 "name": "BaseBdev2", 00:14:44.491 "aliases": [ 00:14:44.491 "44de06b3-9202-476b-9915-ccc55bda26fd" 00:14:44.491 ], 00:14:44.491 "product_name": "Malloc disk", 00:14:44.491 "block_size": 512, 00:14:44.491 "num_blocks": 65536, 00:14:44.491 "uuid": "44de06b3-9202-476b-9915-ccc55bda26fd", 00:14:44.491 "assigned_rate_limits": { 00:14:44.491 "rw_ios_per_sec": 0, 00:14:44.491 "rw_mbytes_per_sec": 0, 00:14:44.491 "r_mbytes_per_sec": 0, 00:14:44.491 "w_mbytes_per_sec": 0 00:14:44.491 }, 00:14:44.491 "claimed": false, 00:14:44.491 "zoned": false, 00:14:44.491 "supported_io_types": { 00:14:44.491 "read": true, 00:14:44.491 "write": true, 00:14:44.491 "unmap": true, 00:14:44.491 "flush": true, 00:14:44.491 "reset": true, 00:14:44.491 "nvme_admin": false, 00:14:44.491 "nvme_io": false, 00:14:44.491 "nvme_io_md": false, 00:14:44.491 "write_zeroes": true, 00:14:44.491 "zcopy": true, 00:14:44.491 "get_zone_info": false, 00:14:44.491 "zone_management": false, 00:14:44.491 "zone_append": false, 00:14:44.491 "compare": false, 00:14:44.491 "compare_and_write": false, 00:14:44.491 "abort": true, 00:14:44.491 "seek_hole": false, 00:14:44.491 "seek_data": false, 00:14:44.491 "copy": true, 00:14:44.491 "nvme_iov_md": false 00:14:44.491 }, 00:14:44.491 "memory_domains": [ 00:14:44.491 { 00:14:44.491 "dma_device_id": "system", 00:14:44.491 "dma_device_type": 1 00:14:44.491 }, 00:14:44.491 { 00:14:44.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.491 "dma_device_type": 2 00:14:44.491 } 00:14:44.491 ], 00:14:44.491 "driver_specific": {} 00:14:44.491 } 00:14:44.491 ] 00:14:44.491 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.491 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:44.491 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:44.491 20:27:37 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:44.491 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:44.491 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.491 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.491 BaseBdev3 00:14:44.491 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.491 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:44.491 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:44.491 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:44.491 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:44.491 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:44.491 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:44.491 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:44.491 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.491 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.491 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.491 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:44.491 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.491 20:27:37 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:44.491 [ 00:14:44.491 { 00:14:44.491 "name": "BaseBdev3", 00:14:44.491 "aliases": [ 00:14:44.491 "ed2e8b8d-1022-45b5-85dd-7210626538a3" 00:14:44.491 ], 00:14:44.491 "product_name": "Malloc disk", 00:14:44.491 "block_size": 512, 00:14:44.491 "num_blocks": 65536, 00:14:44.491 "uuid": "ed2e8b8d-1022-45b5-85dd-7210626538a3", 00:14:44.491 "assigned_rate_limits": { 00:14:44.491 "rw_ios_per_sec": 0, 00:14:44.491 "rw_mbytes_per_sec": 0, 00:14:44.491 "r_mbytes_per_sec": 0, 00:14:44.491 "w_mbytes_per_sec": 0 00:14:44.491 }, 00:14:44.491 "claimed": false, 00:14:44.491 "zoned": false, 00:14:44.491 "supported_io_types": { 00:14:44.491 "read": true, 00:14:44.491 "write": true, 00:14:44.491 "unmap": true, 00:14:44.491 "flush": true, 00:14:44.491 "reset": true, 00:14:44.491 "nvme_admin": false, 00:14:44.491 "nvme_io": false, 00:14:44.491 "nvme_io_md": false, 00:14:44.491 "write_zeroes": true, 00:14:44.491 "zcopy": true, 00:14:44.491 "get_zone_info": false, 00:14:44.491 "zone_management": false, 00:14:44.491 "zone_append": false, 00:14:44.491 "compare": false, 00:14:44.491 "compare_and_write": false, 00:14:44.491 "abort": true, 00:14:44.491 "seek_hole": false, 00:14:44.491 "seek_data": false, 00:14:44.491 "copy": true, 00:14:44.491 "nvme_iov_md": false 00:14:44.491 }, 00:14:44.491 "memory_domains": [ 00:14:44.491 { 00:14:44.491 "dma_device_id": "system", 00:14:44.491 "dma_device_type": 1 00:14:44.491 }, 00:14:44.491 { 00:14:44.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.491 "dma_device_type": 2 00:14:44.491 } 00:14:44.491 ], 00:14:44.491 "driver_specific": {} 00:14:44.491 } 00:14:44.491 ] 00:14:44.491 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.491 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:44.491 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:44.491 20:27:37 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:44.491 20:27:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:44.491 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.491 20:27:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.491 [2024-11-26 20:27:37.998441] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:44.491 [2024-11-26 20:27:37.998485] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:44.491 [2024-11-26 20:27:37.998508] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:44.491 [2024-11-26 20:27:38.000579] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:44.491 20:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.491 20:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:44.491 20:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:44.491 20:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:44.491 20:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:44.491 20:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:44.491 20:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:44.491 20:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.491 20:27:38 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.491 20:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.491 20:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.491 20:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.491 20:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.491 20:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.491 20:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.491 20:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.750 20:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.750 "name": "Existed_Raid", 00:14:44.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.750 "strip_size_kb": 64, 00:14:44.750 "state": "configuring", 00:14:44.750 "raid_level": "raid5f", 00:14:44.750 "superblock": false, 00:14:44.750 "num_base_bdevs": 3, 00:14:44.750 "num_base_bdevs_discovered": 2, 00:14:44.750 "num_base_bdevs_operational": 3, 00:14:44.750 "base_bdevs_list": [ 00:14:44.750 { 00:14:44.750 "name": "BaseBdev1", 00:14:44.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.750 "is_configured": false, 00:14:44.750 "data_offset": 0, 00:14:44.750 "data_size": 0 00:14:44.750 }, 00:14:44.750 { 00:14:44.750 "name": "BaseBdev2", 00:14:44.750 "uuid": "44de06b3-9202-476b-9915-ccc55bda26fd", 00:14:44.750 "is_configured": true, 00:14:44.750 "data_offset": 0, 00:14:44.750 "data_size": 65536 00:14:44.750 }, 00:14:44.750 { 00:14:44.750 "name": "BaseBdev3", 00:14:44.750 "uuid": "ed2e8b8d-1022-45b5-85dd-7210626538a3", 00:14:44.750 "is_configured": true, 
00:14:44.750 "data_offset": 0, 00:14:44.750 "data_size": 65536 00:14:44.750 } 00:14:44.750 ] 00:14:44.750 }' 00:14:44.750 20:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.750 20:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.009 20:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:45.009 20:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.009 20:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.009 [2024-11-26 20:27:38.469756] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:45.009 20:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.009 20:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:45.009 20:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:45.009 20:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:45.009 20:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:45.009 20:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:45.009 20:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:45.009 20:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.009 20:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.009 20:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.009 20:27:38 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.009 20:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.009 20:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.009 20:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.009 20:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.009 20:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.009 20:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.009 "name": "Existed_Raid", 00:14:45.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.009 "strip_size_kb": 64, 00:14:45.009 "state": "configuring", 00:14:45.009 "raid_level": "raid5f", 00:14:45.009 "superblock": false, 00:14:45.009 "num_base_bdevs": 3, 00:14:45.009 "num_base_bdevs_discovered": 1, 00:14:45.009 "num_base_bdevs_operational": 3, 00:14:45.009 "base_bdevs_list": [ 00:14:45.009 { 00:14:45.009 "name": "BaseBdev1", 00:14:45.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.009 "is_configured": false, 00:14:45.009 "data_offset": 0, 00:14:45.009 "data_size": 0 00:14:45.009 }, 00:14:45.009 { 00:14:45.009 "name": null, 00:14:45.009 "uuid": "44de06b3-9202-476b-9915-ccc55bda26fd", 00:14:45.009 "is_configured": false, 00:14:45.009 "data_offset": 0, 00:14:45.009 "data_size": 65536 00:14:45.009 }, 00:14:45.009 { 00:14:45.009 "name": "BaseBdev3", 00:14:45.009 "uuid": "ed2e8b8d-1022-45b5-85dd-7210626538a3", 00:14:45.009 "is_configured": true, 00:14:45.009 "data_offset": 0, 00:14:45.009 "data_size": 65536 00:14:45.009 } 00:14:45.009 ] 00:14:45.009 }' 00:14:45.009 20:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.009 20:27:38 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.578 20:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:45.578 20:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.578 20:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.578 20:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.578 20:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.578 20:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:45.578 20:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:45.578 20:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.578 20:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.578 [2024-11-26 20:27:38.968749] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:45.578 BaseBdev1 00:14:45.578 20:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.578 20:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:45.578 20:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:45.578 20:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:45.578 20:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:45.578 20:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:45.578 20:27:38 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:45.578 20:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:45.578 20:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.578 20:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.578 20:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.578 20:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:45.578 20:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.578 20:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.578 [ 00:14:45.578 { 00:14:45.578 "name": "BaseBdev1", 00:14:45.578 "aliases": [ 00:14:45.578 "9c133156-5213-47c1-9120-8c9622e69d09" 00:14:45.578 ], 00:14:45.578 "product_name": "Malloc disk", 00:14:45.578 "block_size": 512, 00:14:45.578 "num_blocks": 65536, 00:14:45.578 "uuid": "9c133156-5213-47c1-9120-8c9622e69d09", 00:14:45.578 "assigned_rate_limits": { 00:14:45.578 "rw_ios_per_sec": 0, 00:14:45.578 "rw_mbytes_per_sec": 0, 00:14:45.578 "r_mbytes_per_sec": 0, 00:14:45.578 "w_mbytes_per_sec": 0 00:14:45.578 }, 00:14:45.578 "claimed": true, 00:14:45.578 "claim_type": "exclusive_write", 00:14:45.578 "zoned": false, 00:14:45.578 "supported_io_types": { 00:14:45.578 "read": true, 00:14:45.578 "write": true, 00:14:45.578 "unmap": true, 00:14:45.578 "flush": true, 00:14:45.578 "reset": true, 00:14:45.578 "nvme_admin": false, 00:14:45.578 "nvme_io": false, 00:14:45.578 "nvme_io_md": false, 00:14:45.578 "write_zeroes": true, 00:14:45.578 "zcopy": true, 00:14:45.578 "get_zone_info": false, 00:14:45.578 "zone_management": false, 00:14:45.578 "zone_append": false, 00:14:45.578 
"compare": false, 00:14:45.578 "compare_and_write": false, 00:14:45.578 "abort": true, 00:14:45.578 "seek_hole": false, 00:14:45.578 "seek_data": false, 00:14:45.578 "copy": true, 00:14:45.578 "nvme_iov_md": false 00:14:45.578 }, 00:14:45.578 "memory_domains": [ 00:14:45.578 { 00:14:45.578 "dma_device_id": "system", 00:14:45.578 "dma_device_type": 1 00:14:45.578 }, 00:14:45.578 { 00:14:45.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:45.578 "dma_device_type": 2 00:14:45.578 } 00:14:45.578 ], 00:14:45.578 "driver_specific": {} 00:14:45.578 } 00:14:45.578 ] 00:14:45.578 20:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.578 20:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:45.578 20:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:45.578 20:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:45.578 20:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:45.578 20:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:45.578 20:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:45.578 20:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:45.578 20:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.578 20:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.578 20:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.578 20:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.578 20:27:38 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.578 20:27:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.578 20:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.578 20:27:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.578 20:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.578 20:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.578 "name": "Existed_Raid", 00:14:45.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.578 "strip_size_kb": 64, 00:14:45.578 "state": "configuring", 00:14:45.578 "raid_level": "raid5f", 00:14:45.578 "superblock": false, 00:14:45.578 "num_base_bdevs": 3, 00:14:45.578 "num_base_bdevs_discovered": 2, 00:14:45.578 "num_base_bdevs_operational": 3, 00:14:45.578 "base_bdevs_list": [ 00:14:45.578 { 00:14:45.578 "name": "BaseBdev1", 00:14:45.579 "uuid": "9c133156-5213-47c1-9120-8c9622e69d09", 00:14:45.579 "is_configured": true, 00:14:45.579 "data_offset": 0, 00:14:45.579 "data_size": 65536 00:14:45.579 }, 00:14:45.579 { 00:14:45.579 "name": null, 00:14:45.579 "uuid": "44de06b3-9202-476b-9915-ccc55bda26fd", 00:14:45.579 "is_configured": false, 00:14:45.579 "data_offset": 0, 00:14:45.579 "data_size": 65536 00:14:45.579 }, 00:14:45.579 { 00:14:45.579 "name": "BaseBdev3", 00:14:45.579 "uuid": "ed2e8b8d-1022-45b5-85dd-7210626538a3", 00:14:45.579 "is_configured": true, 00:14:45.579 "data_offset": 0, 00:14:45.579 "data_size": 65536 00:14:45.579 } 00:14:45.579 ] 00:14:45.579 }' 00:14:45.579 20:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.579 20:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.151 20:27:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.151 20:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:46.151 20:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.151 20:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.151 20:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.151 20:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:46.151 20:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:46.151 20:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.151 20:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.151 [2024-11-26 20:27:39.499944] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:46.151 20:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.151 20:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:46.151 20:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:46.151 20:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:46.151 20:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:46.151 20:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:46.151 20:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:46.151 20:27:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.151 20:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.151 20:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.151 20:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.151 20:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.151 20:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.151 20:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.151 20:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.151 20:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.151 20:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.151 "name": "Existed_Raid", 00:14:46.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.151 "strip_size_kb": 64, 00:14:46.151 "state": "configuring", 00:14:46.151 "raid_level": "raid5f", 00:14:46.151 "superblock": false, 00:14:46.151 "num_base_bdevs": 3, 00:14:46.151 "num_base_bdevs_discovered": 1, 00:14:46.151 "num_base_bdevs_operational": 3, 00:14:46.151 "base_bdevs_list": [ 00:14:46.151 { 00:14:46.151 "name": "BaseBdev1", 00:14:46.151 "uuid": "9c133156-5213-47c1-9120-8c9622e69d09", 00:14:46.151 "is_configured": true, 00:14:46.151 "data_offset": 0, 00:14:46.151 "data_size": 65536 00:14:46.151 }, 00:14:46.151 { 00:14:46.151 "name": null, 00:14:46.151 "uuid": "44de06b3-9202-476b-9915-ccc55bda26fd", 00:14:46.151 "is_configured": false, 00:14:46.151 "data_offset": 0, 00:14:46.151 "data_size": 65536 00:14:46.151 }, 00:14:46.151 { 00:14:46.151 "name": null, 
00:14:46.151 "uuid": "ed2e8b8d-1022-45b5-85dd-7210626538a3", 00:14:46.151 "is_configured": false, 00:14:46.151 "data_offset": 0, 00:14:46.151 "data_size": 65536 00:14:46.151 } 00:14:46.151 ] 00:14:46.151 }' 00:14:46.151 20:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.151 20:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.411 20:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:46.411 20:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.411 20:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.411 20:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.411 20:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.670 20:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:46.670 20:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:46.670 20:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.670 20:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.670 [2024-11-26 20:27:39.987139] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:46.670 20:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.671 20:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:46.671 20:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:46.671 20:27:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:46.671 20:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:46.671 20:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:46.671 20:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:46.671 20:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.671 20:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.671 20:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.671 20:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.671 20:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.671 20:27:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.671 20:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.671 20:27:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.671 20:27:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.671 20:27:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.671 "name": "Existed_Raid", 00:14:46.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.671 "strip_size_kb": 64, 00:14:46.671 "state": "configuring", 00:14:46.671 "raid_level": "raid5f", 00:14:46.671 "superblock": false, 00:14:46.671 "num_base_bdevs": 3, 00:14:46.671 "num_base_bdevs_discovered": 2, 00:14:46.671 "num_base_bdevs_operational": 3, 00:14:46.671 "base_bdevs_list": [ 00:14:46.671 { 
00:14:46.671 "name": "BaseBdev1", 00:14:46.671 "uuid": "9c133156-5213-47c1-9120-8c9622e69d09", 00:14:46.671 "is_configured": true, 00:14:46.671 "data_offset": 0, 00:14:46.671 "data_size": 65536 00:14:46.671 }, 00:14:46.671 { 00:14:46.671 "name": null, 00:14:46.671 "uuid": "44de06b3-9202-476b-9915-ccc55bda26fd", 00:14:46.671 "is_configured": false, 00:14:46.671 "data_offset": 0, 00:14:46.671 "data_size": 65536 00:14:46.671 }, 00:14:46.671 { 00:14:46.671 "name": "BaseBdev3", 00:14:46.671 "uuid": "ed2e8b8d-1022-45b5-85dd-7210626538a3", 00:14:46.671 "is_configured": true, 00:14:46.671 "data_offset": 0, 00:14:46.671 "data_size": 65536 00:14:46.671 } 00:14:46.671 ] 00:14:46.671 }' 00:14:46.671 20:27:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.671 20:27:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.930 20:27:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.930 20:27:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:46.930 20:27:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.930 20:27:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.930 20:27:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.190 20:27:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:47.190 20:27:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:47.190 20:27:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.190 20:27:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.190 [2024-11-26 20:27:40.490344] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:47.190 20:27:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.190 20:27:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:47.190 20:27:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:47.190 20:27:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:47.190 20:27:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:47.190 20:27:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:47.190 20:27:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:47.190 20:27:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.190 20:27:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.190 20:27:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.190 20:27:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.190 20:27:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.190 20:27:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.190 20:27:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.190 20:27:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.190 20:27:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.190 20:27:40 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.190 "name": "Existed_Raid", 00:14:47.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.190 "strip_size_kb": 64, 00:14:47.190 "state": "configuring", 00:14:47.190 "raid_level": "raid5f", 00:14:47.190 "superblock": false, 00:14:47.190 "num_base_bdevs": 3, 00:14:47.190 "num_base_bdevs_discovered": 1, 00:14:47.190 "num_base_bdevs_operational": 3, 00:14:47.190 "base_bdevs_list": [ 00:14:47.190 { 00:14:47.190 "name": null, 00:14:47.190 "uuid": "9c133156-5213-47c1-9120-8c9622e69d09", 00:14:47.190 "is_configured": false, 00:14:47.190 "data_offset": 0, 00:14:47.190 "data_size": 65536 00:14:47.190 }, 00:14:47.190 { 00:14:47.190 "name": null, 00:14:47.190 "uuid": "44de06b3-9202-476b-9915-ccc55bda26fd", 00:14:47.190 "is_configured": false, 00:14:47.190 "data_offset": 0, 00:14:47.190 "data_size": 65536 00:14:47.190 }, 00:14:47.190 { 00:14:47.190 "name": "BaseBdev3", 00:14:47.190 "uuid": "ed2e8b8d-1022-45b5-85dd-7210626538a3", 00:14:47.190 "is_configured": true, 00:14:47.190 "data_offset": 0, 00:14:47.190 "data_size": 65536 00:14:47.190 } 00:14:47.190 ] 00:14:47.190 }' 00:14:47.190 20:27:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.190 20:27:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.449 20:27:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.449 20:27:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:47.449 20:27:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.449 20:27:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.449 20:27:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.709 20:27:41 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:47.709 20:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:47.709 20:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.709 20:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.709 [2024-11-26 20:27:41.009528] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:47.709 20:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.709 20:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:47.709 20:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:47.709 20:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:47.709 20:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:47.709 20:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:47.709 20:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:47.709 20:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.709 20:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.709 20:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.709 20:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.709 20:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.709 20:27:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.709 20:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.709 20:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.709 20:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.709 20:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.709 "name": "Existed_Raid", 00:14:47.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.709 "strip_size_kb": 64, 00:14:47.709 "state": "configuring", 00:14:47.709 "raid_level": "raid5f", 00:14:47.709 "superblock": false, 00:14:47.709 "num_base_bdevs": 3, 00:14:47.710 "num_base_bdevs_discovered": 2, 00:14:47.710 "num_base_bdevs_operational": 3, 00:14:47.710 "base_bdevs_list": [ 00:14:47.710 { 00:14:47.710 "name": null, 00:14:47.710 "uuid": "9c133156-5213-47c1-9120-8c9622e69d09", 00:14:47.710 "is_configured": false, 00:14:47.710 "data_offset": 0, 00:14:47.710 "data_size": 65536 00:14:47.710 }, 00:14:47.710 { 00:14:47.710 "name": "BaseBdev2", 00:14:47.710 "uuid": "44de06b3-9202-476b-9915-ccc55bda26fd", 00:14:47.710 "is_configured": true, 00:14:47.710 "data_offset": 0, 00:14:47.710 "data_size": 65536 00:14:47.710 }, 00:14:47.710 { 00:14:47.710 "name": "BaseBdev3", 00:14:47.710 "uuid": "ed2e8b8d-1022-45b5-85dd-7210626538a3", 00:14:47.710 "is_configured": true, 00:14:47.710 "data_offset": 0, 00:14:47.710 "data_size": 65536 00:14:47.710 } 00:14:47.710 ] 00:14:47.710 }' 00:14:47.710 20:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.710 20:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.970 20:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.970 20:27:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:47.970 20:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.970 20:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.970 20:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.970 20:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:47.970 20:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.970 20:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.970 20:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.970 20:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:47.970 20:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.230 20:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9c133156-5213-47c1-9120-8c9622e69d09 00:14:48.230 20:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.230 20:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.230 [2024-11-26 20:27:41.568309] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:48.230 [2024-11-26 20:27:41.568362] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:14:48.230 [2024-11-26 20:27:41.568373] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:48.230 [2024-11-26 20:27:41.568660] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006080 00:14:48.230 [2024-11-26 20:27:41.569114] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:14:48.230 [2024-11-26 20:27:41.569135] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:14:48.230 [2024-11-26 20:27:41.569347] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:48.230 NewBaseBdev 00:14:48.230 20:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.230 20:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:48.230 20:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:14:48.230 20:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:48.230 20:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:48.230 20:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:48.230 20:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:48.230 20:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:48.230 20:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.230 20:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.230 20:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.230 20:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:48.230 20:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.230 20:27:41 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.230 [ 00:14:48.230 { 00:14:48.230 "name": "NewBaseBdev", 00:14:48.230 "aliases": [ 00:14:48.230 "9c133156-5213-47c1-9120-8c9622e69d09" 00:14:48.230 ], 00:14:48.230 "product_name": "Malloc disk", 00:14:48.230 "block_size": 512, 00:14:48.230 "num_blocks": 65536, 00:14:48.230 "uuid": "9c133156-5213-47c1-9120-8c9622e69d09", 00:14:48.230 "assigned_rate_limits": { 00:14:48.230 "rw_ios_per_sec": 0, 00:14:48.230 "rw_mbytes_per_sec": 0, 00:14:48.230 "r_mbytes_per_sec": 0, 00:14:48.230 "w_mbytes_per_sec": 0 00:14:48.230 }, 00:14:48.230 "claimed": true, 00:14:48.230 "claim_type": "exclusive_write", 00:14:48.230 "zoned": false, 00:14:48.230 "supported_io_types": { 00:14:48.230 "read": true, 00:14:48.230 "write": true, 00:14:48.230 "unmap": true, 00:14:48.230 "flush": true, 00:14:48.230 "reset": true, 00:14:48.230 "nvme_admin": false, 00:14:48.230 "nvme_io": false, 00:14:48.230 "nvme_io_md": false, 00:14:48.230 "write_zeroes": true, 00:14:48.230 "zcopy": true, 00:14:48.230 "get_zone_info": false, 00:14:48.230 "zone_management": false, 00:14:48.230 "zone_append": false, 00:14:48.230 "compare": false, 00:14:48.230 "compare_and_write": false, 00:14:48.230 "abort": true, 00:14:48.230 "seek_hole": false, 00:14:48.230 "seek_data": false, 00:14:48.230 "copy": true, 00:14:48.230 "nvme_iov_md": false 00:14:48.230 }, 00:14:48.230 "memory_domains": [ 00:14:48.230 { 00:14:48.230 "dma_device_id": "system", 00:14:48.230 "dma_device_type": 1 00:14:48.230 }, 00:14:48.230 { 00:14:48.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.230 "dma_device_type": 2 00:14:48.230 } 00:14:48.230 ], 00:14:48.230 "driver_specific": {} 00:14:48.230 } 00:14:48.230 ] 00:14:48.230 20:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.230 20:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:48.230 20:27:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:48.230 20:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:48.230 20:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:48.230 20:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:48.230 20:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:48.230 20:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:48.230 20:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.230 20:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.230 20:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.230 20:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.230 20:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.230 20:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.230 20:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.230 20:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.230 20:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.230 20:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.230 "name": "Existed_Raid", 00:14:48.230 "uuid": "a69816d9-8d25-4956-a1fb-b58338387750", 00:14:48.230 "strip_size_kb": 64, 00:14:48.230 "state": "online", 
00:14:48.231 "raid_level": "raid5f", 00:14:48.231 "superblock": false, 00:14:48.231 "num_base_bdevs": 3, 00:14:48.231 "num_base_bdevs_discovered": 3, 00:14:48.231 "num_base_bdevs_operational": 3, 00:14:48.231 "base_bdevs_list": [ 00:14:48.231 { 00:14:48.231 "name": "NewBaseBdev", 00:14:48.231 "uuid": "9c133156-5213-47c1-9120-8c9622e69d09", 00:14:48.231 "is_configured": true, 00:14:48.231 "data_offset": 0, 00:14:48.231 "data_size": 65536 00:14:48.231 }, 00:14:48.231 { 00:14:48.231 "name": "BaseBdev2", 00:14:48.231 "uuid": "44de06b3-9202-476b-9915-ccc55bda26fd", 00:14:48.231 "is_configured": true, 00:14:48.231 "data_offset": 0, 00:14:48.231 "data_size": 65536 00:14:48.231 }, 00:14:48.231 { 00:14:48.231 "name": "BaseBdev3", 00:14:48.231 "uuid": "ed2e8b8d-1022-45b5-85dd-7210626538a3", 00:14:48.231 "is_configured": true, 00:14:48.231 "data_offset": 0, 00:14:48.231 "data_size": 65536 00:14:48.231 } 00:14:48.231 ] 00:14:48.231 }' 00:14:48.231 20:27:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.231 20:27:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.490 20:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:48.490 20:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:48.490 20:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:48.490 20:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:48.490 20:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:48.491 20:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:48.491 20:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:48.491 20:27:42 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.491 20:27:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.491 20:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:48.491 [2024-11-26 20:27:42.031800] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:48.750 20:27:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.751 20:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:48.751 "name": "Existed_Raid", 00:14:48.751 "aliases": [ 00:14:48.751 "a69816d9-8d25-4956-a1fb-b58338387750" 00:14:48.751 ], 00:14:48.751 "product_name": "Raid Volume", 00:14:48.751 "block_size": 512, 00:14:48.751 "num_blocks": 131072, 00:14:48.751 "uuid": "a69816d9-8d25-4956-a1fb-b58338387750", 00:14:48.751 "assigned_rate_limits": { 00:14:48.751 "rw_ios_per_sec": 0, 00:14:48.751 "rw_mbytes_per_sec": 0, 00:14:48.751 "r_mbytes_per_sec": 0, 00:14:48.751 "w_mbytes_per_sec": 0 00:14:48.751 }, 00:14:48.751 "claimed": false, 00:14:48.751 "zoned": false, 00:14:48.751 "supported_io_types": { 00:14:48.751 "read": true, 00:14:48.751 "write": true, 00:14:48.751 "unmap": false, 00:14:48.751 "flush": false, 00:14:48.751 "reset": true, 00:14:48.751 "nvme_admin": false, 00:14:48.751 "nvme_io": false, 00:14:48.751 "nvme_io_md": false, 00:14:48.751 "write_zeroes": true, 00:14:48.751 "zcopy": false, 00:14:48.751 "get_zone_info": false, 00:14:48.751 "zone_management": false, 00:14:48.751 "zone_append": false, 00:14:48.751 "compare": false, 00:14:48.751 "compare_and_write": false, 00:14:48.751 "abort": false, 00:14:48.751 "seek_hole": false, 00:14:48.751 "seek_data": false, 00:14:48.751 "copy": false, 00:14:48.751 "nvme_iov_md": false 00:14:48.751 }, 00:14:48.751 "driver_specific": { 00:14:48.751 "raid": { 00:14:48.751 "uuid": 
"a69816d9-8d25-4956-a1fb-b58338387750", 00:14:48.751 "strip_size_kb": 64, 00:14:48.751 "state": "online", 00:14:48.751 "raid_level": "raid5f", 00:14:48.751 "superblock": false, 00:14:48.751 "num_base_bdevs": 3, 00:14:48.751 "num_base_bdevs_discovered": 3, 00:14:48.751 "num_base_bdevs_operational": 3, 00:14:48.751 "base_bdevs_list": [ 00:14:48.751 { 00:14:48.751 "name": "NewBaseBdev", 00:14:48.751 "uuid": "9c133156-5213-47c1-9120-8c9622e69d09", 00:14:48.751 "is_configured": true, 00:14:48.751 "data_offset": 0, 00:14:48.751 "data_size": 65536 00:14:48.751 }, 00:14:48.751 { 00:14:48.751 "name": "BaseBdev2", 00:14:48.751 "uuid": "44de06b3-9202-476b-9915-ccc55bda26fd", 00:14:48.751 "is_configured": true, 00:14:48.751 "data_offset": 0, 00:14:48.751 "data_size": 65536 00:14:48.751 }, 00:14:48.751 { 00:14:48.751 "name": "BaseBdev3", 00:14:48.751 "uuid": "ed2e8b8d-1022-45b5-85dd-7210626538a3", 00:14:48.751 "is_configured": true, 00:14:48.751 "data_offset": 0, 00:14:48.751 "data_size": 65536 00:14:48.751 } 00:14:48.751 ] 00:14:48.751 } 00:14:48.751 } 00:14:48.751 }' 00:14:48.751 20:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:48.751 20:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:48.751 BaseBdev2 00:14:48.751 BaseBdev3' 00:14:48.751 20:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:48.751 20:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:48.751 20:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:48.751 20:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:48.751 20:27:42 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.751 20:27:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.751 20:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:48.751 20:27:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.751 20:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:48.751 20:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:48.751 20:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:48.751 20:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:48.751 20:27:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.751 20:27:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.751 20:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:48.751 20:27:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.751 20:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:48.751 20:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:48.751 20:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:48.751 20:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:48.751 20:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:14:48.751 20:27:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.751 20:27:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.751 20:27:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.011 20:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:49.011 20:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:49.011 20:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:49.011 20:27:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.011 20:27:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.011 [2024-11-26 20:27:42.331107] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:49.011 [2024-11-26 20:27:42.331144] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:49.011 [2024-11-26 20:27:42.331252] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:49.011 [2024-11-26 20:27:42.331524] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:49.011 [2024-11-26 20:27:42.331569] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:14:49.011 20:27:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.011 20:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 91029 00:14:49.011 20:27:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 91029 ']' 00:14:49.011 20:27:42 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 91029 00:14:49.011 20:27:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:14:49.011 20:27:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:49.011 20:27:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91029 00:14:49.011 20:27:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:49.011 20:27:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:49.011 killing process with pid 91029 00:14:49.011 20:27:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91029' 00:14:49.011 20:27:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 91029 00:14:49.011 [2024-11-26 20:27:42.374662] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:49.011 20:27:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 91029 00:14:49.011 [2024-11-26 20:27:42.426156] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:49.271 20:27:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:49.271 00:14:49.271 real 0m9.415s 00:14:49.271 user 0m15.904s 00:14:49.271 sys 0m1.956s 00:14:49.271 20:27:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:49.271 20:27:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.271 ************************************ 00:14:49.271 END TEST raid5f_state_function_test 00:14:49.271 ************************************ 00:14:49.531 20:27:42 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:14:49.531 20:27:42 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:49.531 20:27:42 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:49.531 20:27:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:49.531 ************************************ 00:14:49.531 START TEST raid5f_state_function_test_sb 00:14:49.531 ************************************ 00:14:49.531 20:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 true 00:14:49.531 20:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:49.531 20:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:49.531 20:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:49.531 20:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:49.531 20:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:49.531 20:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:49.531 20:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:49.531 20:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:49.531 20:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:49.531 20:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:49.531 20:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:49.531 20:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:49.531 20:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:49.531 20:27:42 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:49.531 20:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:49.531 20:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:49.531 20:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:49.531 20:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:49.531 20:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:49.531 20:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:49.531 20:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:49.531 20:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:49.531 20:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:49.531 20:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:49.531 20:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:49.531 20:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:49.531 20:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=91634 00:14:49.531 20:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:49.531 Process raid pid: 91634 00:14:49.531 20:27:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 91634' 00:14:49.531 20:27:42 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 91634 00:14:49.531 20:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 91634 ']' 00:14:49.531 20:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.531 20:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:49.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:49.531 20:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:49.531 20:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:49.531 20:27:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.531 [2024-11-26 20:27:42.935675] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:14:49.531 [2024-11-26 20:27:42.935809] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:49.790 [2024-11-26 20:27:43.097990] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.790 [2024-11-26 20:27:43.176772] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.790 [2024-11-26 20:27:43.247905] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:49.790 [2024-11-26 20:27:43.247945] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:50.363 20:27:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:50.363 20:27:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:14:50.363 20:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:50.363 20:27:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.363 20:27:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.363 [2024-11-26 20:27:43.809138] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:50.363 [2024-11-26 20:27:43.809193] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:50.363 [2024-11-26 20:27:43.809230] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:50.363 [2024-11-26 20:27:43.809241] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:50.363 [2024-11-26 20:27:43.809247] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:14:50.363 [2024-11-26 20:27:43.809259] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:50.363 20:27:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.363 20:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:50.363 20:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:50.363 20:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:50.363 20:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:50.363 20:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:50.363 20:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:50.363 20:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.363 20:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.363 20:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.363 20:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.363 20:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.363 20:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.363 20:27:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.363 20:27:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.363 20:27:43 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.363 20:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.363 "name": "Existed_Raid", 00:14:50.363 "uuid": "ac04d324-489e-4092-b56f-2b3d22656e22", 00:14:50.363 "strip_size_kb": 64, 00:14:50.363 "state": "configuring", 00:14:50.363 "raid_level": "raid5f", 00:14:50.363 "superblock": true, 00:14:50.363 "num_base_bdevs": 3, 00:14:50.363 "num_base_bdevs_discovered": 0, 00:14:50.363 "num_base_bdevs_operational": 3, 00:14:50.363 "base_bdevs_list": [ 00:14:50.363 { 00:14:50.363 "name": "BaseBdev1", 00:14:50.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.363 "is_configured": false, 00:14:50.363 "data_offset": 0, 00:14:50.363 "data_size": 0 00:14:50.363 }, 00:14:50.363 { 00:14:50.363 "name": "BaseBdev2", 00:14:50.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.363 "is_configured": false, 00:14:50.363 "data_offset": 0, 00:14:50.363 "data_size": 0 00:14:50.363 }, 00:14:50.363 { 00:14:50.363 "name": "BaseBdev3", 00:14:50.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.363 "is_configured": false, 00:14:50.363 "data_offset": 0, 00:14:50.363 "data_size": 0 00:14:50.363 } 00:14:50.363 ] 00:14:50.363 }' 00:14:50.363 20:27:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.363 20:27:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.941 20:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:50.941 20:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.941 20:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.941 [2024-11-26 20:27:44.216363] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:50.941 
[2024-11-26 20:27:44.216409] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:14:50.941 20:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.941 20:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:50.941 20:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.941 20:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.941 [2024-11-26 20:27:44.228389] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:50.941 [2024-11-26 20:27:44.228431] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:50.941 [2024-11-26 20:27:44.228440] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:50.941 [2024-11-26 20:27:44.228450] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:50.941 [2024-11-26 20:27:44.228456] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:50.941 [2024-11-26 20:27:44.228465] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:50.941 20:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.941 20:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:50.941 20:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.941 20:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.941 [2024-11-26 20:27:44.254495] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:50.941 BaseBdev1 00:14:50.941 20:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.941 20:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:50.941 20:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:50.941 20:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:50.941 20:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:50.941 20:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:50.941 20:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:50.941 20:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:50.941 20:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.941 20:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.941 20:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.941 20:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:50.941 20:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.941 20:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.941 [ 00:14:50.941 { 00:14:50.941 "name": "BaseBdev1", 00:14:50.941 "aliases": [ 00:14:50.941 "5fdc79cf-1d40-482e-9955-cf614e035dd5" 00:14:50.941 ], 00:14:50.941 "product_name": "Malloc disk", 00:14:50.941 "block_size": 512, 00:14:50.941 
"num_blocks": 65536, 00:14:50.941 "uuid": "5fdc79cf-1d40-482e-9955-cf614e035dd5", 00:14:50.941 "assigned_rate_limits": { 00:14:50.941 "rw_ios_per_sec": 0, 00:14:50.941 "rw_mbytes_per_sec": 0, 00:14:50.941 "r_mbytes_per_sec": 0, 00:14:50.941 "w_mbytes_per_sec": 0 00:14:50.941 }, 00:14:50.941 "claimed": true, 00:14:50.941 "claim_type": "exclusive_write", 00:14:50.941 "zoned": false, 00:14:50.941 "supported_io_types": { 00:14:50.941 "read": true, 00:14:50.941 "write": true, 00:14:50.941 "unmap": true, 00:14:50.941 "flush": true, 00:14:50.941 "reset": true, 00:14:50.941 "nvme_admin": false, 00:14:50.941 "nvme_io": false, 00:14:50.941 "nvme_io_md": false, 00:14:50.941 "write_zeroes": true, 00:14:50.941 "zcopy": true, 00:14:50.941 "get_zone_info": false, 00:14:50.941 "zone_management": false, 00:14:50.941 "zone_append": false, 00:14:50.941 "compare": false, 00:14:50.941 "compare_and_write": false, 00:14:50.941 "abort": true, 00:14:50.941 "seek_hole": false, 00:14:50.941 "seek_data": false, 00:14:50.941 "copy": true, 00:14:50.941 "nvme_iov_md": false 00:14:50.941 }, 00:14:50.941 "memory_domains": [ 00:14:50.941 { 00:14:50.941 "dma_device_id": "system", 00:14:50.941 "dma_device_type": 1 00:14:50.941 }, 00:14:50.941 { 00:14:50.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.941 "dma_device_type": 2 00:14:50.941 } 00:14:50.941 ], 00:14:50.941 "driver_specific": {} 00:14:50.941 } 00:14:50.941 ] 00:14:50.941 20:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.941 20:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:50.941 20:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:50.941 20:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:50.941 20:27:44 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:50.941 20:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:50.941 20:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:50.941 20:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:50.941 20:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.941 20:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.941 20:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.941 20:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.941 20:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.941 20:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.941 20:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.941 20:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.941 20:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.941 20:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.941 "name": "Existed_Raid", 00:14:50.941 "uuid": "d81e9694-1570-46a3-b493-116ffdac284b", 00:14:50.941 "strip_size_kb": 64, 00:14:50.941 "state": "configuring", 00:14:50.941 "raid_level": "raid5f", 00:14:50.941 "superblock": true, 00:14:50.941 "num_base_bdevs": 3, 00:14:50.941 "num_base_bdevs_discovered": 1, 00:14:50.941 "num_base_bdevs_operational": 3, 00:14:50.941 "base_bdevs_list": [ 00:14:50.941 { 00:14:50.941 
"name": "BaseBdev1", 00:14:50.941 "uuid": "5fdc79cf-1d40-482e-9955-cf614e035dd5", 00:14:50.941 "is_configured": true, 00:14:50.941 "data_offset": 2048, 00:14:50.941 "data_size": 63488 00:14:50.941 }, 00:14:50.941 { 00:14:50.941 "name": "BaseBdev2", 00:14:50.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.941 "is_configured": false, 00:14:50.941 "data_offset": 0, 00:14:50.941 "data_size": 0 00:14:50.941 }, 00:14:50.941 { 00:14:50.941 "name": "BaseBdev3", 00:14:50.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.941 "is_configured": false, 00:14:50.942 "data_offset": 0, 00:14:50.942 "data_size": 0 00:14:50.942 } 00:14:50.942 ] 00:14:50.942 }' 00:14:50.942 20:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.942 20:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.202 20:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:51.202 20:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.202 20:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.202 [2024-11-26 20:27:44.733768] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:51.202 [2024-11-26 20:27:44.733830] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:14:51.202 20:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.202 20:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:51.202 20:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.202 20:27:44 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:14:51.202 [2024-11-26 20:27:44.745791] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:51.202 [2024-11-26 20:27:44.747767] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:51.202 [2024-11-26 20:27:44.747808] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:51.202 [2024-11-26 20:27:44.747833] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:51.202 [2024-11-26 20:27:44.747844] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:51.202 20:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.202 20:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:51.202 20:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:51.202 20:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:51.202 20:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:51.202 20:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:51.202 20:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:51.202 20:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.461 20:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:51.461 20:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.461 20:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:14:51.461 20:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.461 20:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.461 20:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.461 20:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.461 20:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.461 20:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.461 20:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.461 20:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.461 "name": "Existed_Raid", 00:14:51.461 "uuid": "3374db8d-4ec7-4a4b-b794-739c3139d375", 00:14:51.461 "strip_size_kb": 64, 00:14:51.461 "state": "configuring", 00:14:51.461 "raid_level": "raid5f", 00:14:51.461 "superblock": true, 00:14:51.461 "num_base_bdevs": 3, 00:14:51.461 "num_base_bdevs_discovered": 1, 00:14:51.461 "num_base_bdevs_operational": 3, 00:14:51.461 "base_bdevs_list": [ 00:14:51.461 { 00:14:51.461 "name": "BaseBdev1", 00:14:51.461 "uuid": "5fdc79cf-1d40-482e-9955-cf614e035dd5", 00:14:51.461 "is_configured": true, 00:14:51.461 "data_offset": 2048, 00:14:51.461 "data_size": 63488 00:14:51.461 }, 00:14:51.461 { 00:14:51.461 "name": "BaseBdev2", 00:14:51.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.461 "is_configured": false, 00:14:51.461 "data_offset": 0, 00:14:51.461 "data_size": 0 00:14:51.461 }, 00:14:51.461 { 00:14:51.461 "name": "BaseBdev3", 00:14:51.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.461 "is_configured": false, 00:14:51.461 "data_offset": 0, 00:14:51.461 "data_size": 
0 00:14:51.461 } 00:14:51.461 ] 00:14:51.461 }' 00:14:51.461 20:27:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.461 20:27:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.721 20:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:51.721 20:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.721 20:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.981 [2024-11-26 20:27:45.275581] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:51.981 BaseBdev2 00:14:51.981 20:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.981 20:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:51.981 20:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:51.981 20:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:51.981 20:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:51.981 20:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:51.981 20:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:51.981 20:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:51.981 20:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.981 20:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.981 20:27:45 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.981 20:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:51.981 20:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.981 20:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.981 [ 00:14:51.981 { 00:14:51.981 "name": "BaseBdev2", 00:14:51.981 "aliases": [ 00:14:51.981 "19608055-fb58-4886-ab1f-d88aef5ea4fb" 00:14:51.981 ], 00:14:51.981 "product_name": "Malloc disk", 00:14:51.981 "block_size": 512, 00:14:51.981 "num_blocks": 65536, 00:14:51.981 "uuid": "19608055-fb58-4886-ab1f-d88aef5ea4fb", 00:14:51.981 "assigned_rate_limits": { 00:14:51.981 "rw_ios_per_sec": 0, 00:14:51.981 "rw_mbytes_per_sec": 0, 00:14:51.981 "r_mbytes_per_sec": 0, 00:14:51.981 "w_mbytes_per_sec": 0 00:14:51.981 }, 00:14:51.981 "claimed": true, 00:14:51.981 "claim_type": "exclusive_write", 00:14:51.981 "zoned": false, 00:14:51.981 "supported_io_types": { 00:14:51.981 "read": true, 00:14:51.981 "write": true, 00:14:51.981 "unmap": true, 00:14:51.981 "flush": true, 00:14:51.981 "reset": true, 00:14:51.981 "nvme_admin": false, 00:14:51.981 "nvme_io": false, 00:14:51.981 "nvme_io_md": false, 00:14:51.981 "write_zeroes": true, 00:14:51.981 "zcopy": true, 00:14:51.981 "get_zone_info": false, 00:14:51.981 "zone_management": false, 00:14:51.981 "zone_append": false, 00:14:51.981 "compare": false, 00:14:51.981 "compare_and_write": false, 00:14:51.981 "abort": true, 00:14:51.981 "seek_hole": false, 00:14:51.981 "seek_data": false, 00:14:51.981 "copy": true, 00:14:51.981 "nvme_iov_md": false 00:14:51.981 }, 00:14:51.981 "memory_domains": [ 00:14:51.981 { 00:14:51.981 "dma_device_id": "system", 00:14:51.981 "dma_device_type": 1 00:14:51.981 }, 00:14:51.981 { 00:14:51.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.981 "dma_device_type": 2 00:14:51.981 } 
00:14:51.981 ], 00:14:51.981 "driver_specific": {} 00:14:51.981 } 00:14:51.981 ] 00:14:51.981 20:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.981 20:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:51.981 20:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:51.981 20:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:51.981 20:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:51.981 20:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:51.981 20:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:51.981 20:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:51.981 20:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.981 20:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:51.981 20:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.981 20:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.981 20:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.981 20:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.981 20:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.981 20:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:14:51.981 20:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.981 20:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.981 20:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.981 20:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.981 "name": "Existed_Raid", 00:14:51.981 "uuid": "3374db8d-4ec7-4a4b-b794-739c3139d375", 00:14:51.981 "strip_size_kb": 64, 00:14:51.981 "state": "configuring", 00:14:51.981 "raid_level": "raid5f", 00:14:51.981 "superblock": true, 00:14:51.981 "num_base_bdevs": 3, 00:14:51.981 "num_base_bdevs_discovered": 2, 00:14:51.981 "num_base_bdevs_operational": 3, 00:14:51.981 "base_bdevs_list": [ 00:14:51.981 { 00:14:51.981 "name": "BaseBdev1", 00:14:51.981 "uuid": "5fdc79cf-1d40-482e-9955-cf614e035dd5", 00:14:51.981 "is_configured": true, 00:14:51.981 "data_offset": 2048, 00:14:51.981 "data_size": 63488 00:14:51.981 }, 00:14:51.981 { 00:14:51.981 "name": "BaseBdev2", 00:14:51.981 "uuid": "19608055-fb58-4886-ab1f-d88aef5ea4fb", 00:14:51.981 "is_configured": true, 00:14:51.981 "data_offset": 2048, 00:14:51.981 "data_size": 63488 00:14:51.981 }, 00:14:51.981 { 00:14:51.981 "name": "BaseBdev3", 00:14:51.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.981 "is_configured": false, 00:14:51.981 "data_offset": 0, 00:14:51.981 "data_size": 0 00:14:51.981 } 00:14:51.981 ] 00:14:51.981 }' 00:14:51.981 20:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.981 20:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.241 20:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:52.241 20:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:14:52.241 20:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.241 [2024-11-26 20:27:45.754611] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:52.241 [2024-11-26 20:27:45.754876] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:14:52.241 [2024-11-26 20:27:45.754906] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:52.241 BaseBdev3 00:14:52.241 [2024-11-26 20:27:45.755226] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:14:52.241 [2024-11-26 20:27:45.755716] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:14:52.241 [2024-11-26 20:27:45.755739] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:14:52.241 20:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.241 [2024-11-26 20:27:45.755902] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:52.241 20:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:52.241 20:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:52.241 20:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:52.241 20:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:52.241 20:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:52.241 20:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:52.241 20:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 
00:14:52.241 20:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.241 20:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.241 20:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.241 20:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:52.241 20:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.241 20:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.241 [ 00:14:52.241 { 00:14:52.241 "name": "BaseBdev3", 00:14:52.241 "aliases": [ 00:14:52.241 "5a78f49e-8049-4d52-928f-6cfbda5c7373" 00:14:52.241 ], 00:14:52.241 "product_name": "Malloc disk", 00:14:52.241 "block_size": 512, 00:14:52.241 "num_blocks": 65536, 00:14:52.241 "uuid": "5a78f49e-8049-4d52-928f-6cfbda5c7373", 00:14:52.241 "assigned_rate_limits": { 00:14:52.241 "rw_ios_per_sec": 0, 00:14:52.241 "rw_mbytes_per_sec": 0, 00:14:52.241 "r_mbytes_per_sec": 0, 00:14:52.241 "w_mbytes_per_sec": 0 00:14:52.241 }, 00:14:52.241 "claimed": true, 00:14:52.241 "claim_type": "exclusive_write", 00:14:52.241 "zoned": false, 00:14:52.241 "supported_io_types": { 00:14:52.241 "read": true, 00:14:52.241 "write": true, 00:14:52.241 "unmap": true, 00:14:52.241 "flush": true, 00:14:52.241 "reset": true, 00:14:52.241 "nvme_admin": false, 00:14:52.241 "nvme_io": false, 00:14:52.241 "nvme_io_md": false, 00:14:52.241 "write_zeroes": true, 00:14:52.241 "zcopy": true, 00:14:52.241 "get_zone_info": false, 00:14:52.241 "zone_management": false, 00:14:52.241 "zone_append": false, 00:14:52.241 "compare": false, 00:14:52.241 "compare_and_write": false, 00:14:52.241 "abort": true, 00:14:52.241 "seek_hole": false, 00:14:52.241 "seek_data": false, 00:14:52.241 "copy": true, 00:14:52.241 "nvme_iov_md": 
false 00:14:52.241 }, 00:14:52.241 "memory_domains": [ 00:14:52.241 { 00:14:52.241 "dma_device_id": "system", 00:14:52.241 "dma_device_type": 1 00:14:52.241 }, 00:14:52.241 { 00:14:52.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.241 "dma_device_type": 2 00:14:52.241 } 00:14:52.241 ], 00:14:52.241 "driver_specific": {} 00:14:52.241 } 00:14:52.241 ] 00:14:52.241 20:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.241 20:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:52.241 20:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:52.241 20:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:52.241 20:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:52.241 20:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:52.241 20:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:52.241 20:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:52.241 20:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:52.241 20:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:52.241 20:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.241 20:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.241 20:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.241 20:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local 
tmp 00:14:52.501 20:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.501 20:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.501 20:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.501 20:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.501 20:27:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.501 20:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.501 "name": "Existed_Raid", 00:14:52.501 "uuid": "3374db8d-4ec7-4a4b-b794-739c3139d375", 00:14:52.501 "strip_size_kb": 64, 00:14:52.501 "state": "online", 00:14:52.501 "raid_level": "raid5f", 00:14:52.501 "superblock": true, 00:14:52.501 "num_base_bdevs": 3, 00:14:52.501 "num_base_bdevs_discovered": 3, 00:14:52.501 "num_base_bdevs_operational": 3, 00:14:52.501 "base_bdevs_list": [ 00:14:52.501 { 00:14:52.501 "name": "BaseBdev1", 00:14:52.501 "uuid": "5fdc79cf-1d40-482e-9955-cf614e035dd5", 00:14:52.501 "is_configured": true, 00:14:52.501 "data_offset": 2048, 00:14:52.501 "data_size": 63488 00:14:52.501 }, 00:14:52.501 { 00:14:52.501 "name": "BaseBdev2", 00:14:52.501 "uuid": "19608055-fb58-4886-ab1f-d88aef5ea4fb", 00:14:52.501 "is_configured": true, 00:14:52.501 "data_offset": 2048, 00:14:52.501 "data_size": 63488 00:14:52.501 }, 00:14:52.501 { 00:14:52.501 "name": "BaseBdev3", 00:14:52.501 "uuid": "5a78f49e-8049-4d52-928f-6cfbda5c7373", 00:14:52.501 "is_configured": true, 00:14:52.501 "data_offset": 2048, 00:14:52.501 "data_size": 63488 00:14:52.501 } 00:14:52.501 ] 00:14:52.501 }' 00:14:52.501 20:27:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.501 20:27:45 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:52.760 20:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:52.760 20:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:52.760 20:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:52.760 20:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:52.760 20:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:52.760 20:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:52.760 20:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:52.760 20:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.760 20:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.760 20:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:52.760 [2024-11-26 20:27:46.262032] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:52.760 20:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.760 20:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:52.760 "name": "Existed_Raid", 00:14:52.760 "aliases": [ 00:14:52.760 "3374db8d-4ec7-4a4b-b794-739c3139d375" 00:14:52.760 ], 00:14:52.760 "product_name": "Raid Volume", 00:14:52.760 "block_size": 512, 00:14:52.760 "num_blocks": 126976, 00:14:52.760 "uuid": "3374db8d-4ec7-4a4b-b794-739c3139d375", 00:14:52.760 "assigned_rate_limits": { 00:14:52.760 "rw_ios_per_sec": 0, 00:14:52.760 "rw_mbytes_per_sec": 0, 00:14:52.760 "r_mbytes_per_sec": 
0, 00:14:52.760 "w_mbytes_per_sec": 0 00:14:52.760 }, 00:14:52.760 "claimed": false, 00:14:52.760 "zoned": false, 00:14:52.760 "supported_io_types": { 00:14:52.760 "read": true, 00:14:52.760 "write": true, 00:14:52.760 "unmap": false, 00:14:52.760 "flush": false, 00:14:52.760 "reset": true, 00:14:52.760 "nvme_admin": false, 00:14:52.760 "nvme_io": false, 00:14:52.760 "nvme_io_md": false, 00:14:52.760 "write_zeroes": true, 00:14:52.760 "zcopy": false, 00:14:52.760 "get_zone_info": false, 00:14:52.760 "zone_management": false, 00:14:52.760 "zone_append": false, 00:14:52.760 "compare": false, 00:14:52.760 "compare_and_write": false, 00:14:52.760 "abort": false, 00:14:52.760 "seek_hole": false, 00:14:52.760 "seek_data": false, 00:14:52.760 "copy": false, 00:14:52.760 "nvme_iov_md": false 00:14:52.760 }, 00:14:52.760 "driver_specific": { 00:14:52.760 "raid": { 00:14:52.760 "uuid": "3374db8d-4ec7-4a4b-b794-739c3139d375", 00:14:52.760 "strip_size_kb": 64, 00:14:52.760 "state": "online", 00:14:52.760 "raid_level": "raid5f", 00:14:52.760 "superblock": true, 00:14:52.760 "num_base_bdevs": 3, 00:14:52.760 "num_base_bdevs_discovered": 3, 00:14:52.760 "num_base_bdevs_operational": 3, 00:14:52.760 "base_bdevs_list": [ 00:14:52.760 { 00:14:52.760 "name": "BaseBdev1", 00:14:52.760 "uuid": "5fdc79cf-1d40-482e-9955-cf614e035dd5", 00:14:52.760 "is_configured": true, 00:14:52.760 "data_offset": 2048, 00:14:52.760 "data_size": 63488 00:14:52.760 }, 00:14:52.760 { 00:14:52.760 "name": "BaseBdev2", 00:14:52.760 "uuid": "19608055-fb58-4886-ab1f-d88aef5ea4fb", 00:14:52.760 "is_configured": true, 00:14:52.760 "data_offset": 2048, 00:14:52.760 "data_size": 63488 00:14:52.760 }, 00:14:52.760 { 00:14:52.760 "name": "BaseBdev3", 00:14:52.760 "uuid": "5a78f49e-8049-4d52-928f-6cfbda5c7373", 00:14:52.760 "is_configured": true, 00:14:52.760 "data_offset": 2048, 00:14:52.760 "data_size": 63488 00:14:52.760 } 00:14:52.760 ] 00:14:52.761 } 00:14:52.761 } 00:14:52.761 }' 00:14:52.761 20:27:46 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:53.020 20:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:53.020 BaseBdev2 00:14:53.020 BaseBdev3' 00:14:53.020 20:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.020 20:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:53.020 20:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:53.020 20:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:53.020 20:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.020 20:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.020 20:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.020 20:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.020 20:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:53.020 20:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:53.020 20:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:53.020 20:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:53.020 20:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:14:53.020 20:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.020 20:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.020 20:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.020 20:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:53.020 20:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:53.020 20:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:53.020 20:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:53.020 20:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.020 20:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.020 20:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.020 20:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.020 20:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:53.020 20:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:53.020 20:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:53.020 20:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.020 20:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.020 [2024-11-26 20:27:46.565398] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev1 00:14:53.280 20:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.280 20:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:53.280 20:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:53.280 20:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:53.280 20:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:14:53.280 20:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:53.280 20:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:14:53.280 20:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:53.280 20:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:53.280 20:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:53.280 20:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:53.280 20:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:53.280 20:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.280 20:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.280 20:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.280 20:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.280 20:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:53.280 20:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.280 20:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:53.280 20:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.280 20:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.280 20:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.280 "name": "Existed_Raid", 00:14:53.280 "uuid": "3374db8d-4ec7-4a4b-b794-739c3139d375", 00:14:53.280 "strip_size_kb": 64, 00:14:53.280 "state": "online", 00:14:53.280 "raid_level": "raid5f", 00:14:53.280 "superblock": true, 00:14:53.280 "num_base_bdevs": 3, 00:14:53.280 "num_base_bdevs_discovered": 2, 00:14:53.280 "num_base_bdevs_operational": 2, 00:14:53.280 "base_bdevs_list": [ 00:14:53.280 { 00:14:53.280 "name": null, 00:14:53.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.280 "is_configured": false, 00:14:53.280 "data_offset": 0, 00:14:53.280 "data_size": 63488 00:14:53.280 }, 00:14:53.280 { 00:14:53.280 "name": "BaseBdev2", 00:14:53.280 "uuid": "19608055-fb58-4886-ab1f-d88aef5ea4fb", 00:14:53.280 "is_configured": true, 00:14:53.280 "data_offset": 2048, 00:14:53.280 "data_size": 63488 00:14:53.280 }, 00:14:53.280 { 00:14:53.280 "name": "BaseBdev3", 00:14:53.280 "uuid": "5a78f49e-8049-4d52-928f-6cfbda5c7373", 00:14:53.280 "is_configured": true, 00:14:53.280 "data_offset": 2048, 00:14:53.280 "data_size": 63488 00:14:53.280 } 00:14:53.280 ] 00:14:53.280 }' 00:14:53.280 20:27:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.280 20:27:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.539 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:14:53.539 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:53.539 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.539 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.539 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.539 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:53.539 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.539 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:53.539 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:53.539 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:53.539 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.539 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.539 [2024-11-26 20:27:47.081343] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:53.539 [2024-11-26 20:27:47.081528] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:53.799 [2024-11-26 20:27:47.102717] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:53.799 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.799 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:53.799 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:53.799 20:27:47 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:53.799 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.799 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.799 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.799 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.799 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:53.799 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:53.799 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:53.799 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.799 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.799 [2024-11-26 20:27:47.162718] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:53.799 [2024-11-26 20:27:47.162785] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:14:53.799 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.799 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:53.799 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:53.799 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.799 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:53.799 
20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.799 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.799 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.799 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:53.799 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:53.799 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:53.799 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:53.799 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:53.799 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:53.799 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.799 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.799 BaseBdev2 00:14:53.799 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.799 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:53.799 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:53.799 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:53.799 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:53.799 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:53.799 20:27:47 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:53.799 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:53.799 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.799 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.799 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.799 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:53.799 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.799 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.799 [ 00:14:53.799 { 00:14:53.799 "name": "BaseBdev2", 00:14:53.799 "aliases": [ 00:14:53.799 "8c60bb17-d747-4d0a-ad06-8ed6452c2f8e" 00:14:53.799 ], 00:14:53.799 "product_name": "Malloc disk", 00:14:53.799 "block_size": 512, 00:14:53.799 "num_blocks": 65536, 00:14:53.799 "uuid": "8c60bb17-d747-4d0a-ad06-8ed6452c2f8e", 00:14:53.799 "assigned_rate_limits": { 00:14:53.799 "rw_ios_per_sec": 0, 00:14:53.799 "rw_mbytes_per_sec": 0, 00:14:53.799 "r_mbytes_per_sec": 0, 00:14:53.799 "w_mbytes_per_sec": 0 00:14:53.799 }, 00:14:53.799 "claimed": false, 00:14:53.799 "zoned": false, 00:14:53.799 "supported_io_types": { 00:14:53.799 "read": true, 00:14:53.799 "write": true, 00:14:53.799 "unmap": true, 00:14:53.799 "flush": true, 00:14:53.799 "reset": true, 00:14:53.799 "nvme_admin": false, 00:14:53.799 "nvme_io": false, 00:14:53.799 "nvme_io_md": false, 00:14:53.799 "write_zeroes": true, 00:14:53.799 "zcopy": true, 00:14:53.799 "get_zone_info": false, 00:14:53.799 "zone_management": false, 00:14:53.799 "zone_append": false, 00:14:53.799 "compare": false, 00:14:53.799 "compare_and_write": false, 
00:14:53.799 "abort": true, 00:14:53.799 "seek_hole": false, 00:14:53.799 "seek_data": false, 00:14:53.799 "copy": true, 00:14:53.799 "nvme_iov_md": false 00:14:53.799 }, 00:14:53.799 "memory_domains": [ 00:14:53.799 { 00:14:53.799 "dma_device_id": "system", 00:14:53.799 "dma_device_type": 1 00:14:53.799 }, 00:14:53.799 { 00:14:53.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.799 "dma_device_type": 2 00:14:53.799 } 00:14:53.799 ], 00:14:53.800 "driver_specific": {} 00:14:53.800 } 00:14:53.800 ] 00:14:53.800 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.800 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:53.800 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:53.800 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:53.800 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:53.800 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.800 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.800 BaseBdev3 00:14:53.800 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.800 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:53.800 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:53.800 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:53.800 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:53.800 20:27:47 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:53.800 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:53.800 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:53.800 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.800 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.800 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.800 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:53.800 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.800 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.800 [ 00:14:53.800 { 00:14:53.800 "name": "BaseBdev3", 00:14:53.800 "aliases": [ 00:14:53.800 "31ecfb0b-c291-4704-b208-3d0d1dc1e1d0" 00:14:53.800 ], 00:14:53.800 "product_name": "Malloc disk", 00:14:53.800 "block_size": 512, 00:14:53.800 "num_blocks": 65536, 00:14:53.800 "uuid": "31ecfb0b-c291-4704-b208-3d0d1dc1e1d0", 00:14:53.800 "assigned_rate_limits": { 00:14:53.800 "rw_ios_per_sec": 0, 00:14:53.800 "rw_mbytes_per_sec": 0, 00:14:53.800 "r_mbytes_per_sec": 0, 00:14:53.800 "w_mbytes_per_sec": 0 00:14:53.800 }, 00:14:53.800 "claimed": false, 00:14:53.800 "zoned": false, 00:14:53.800 "supported_io_types": { 00:14:53.800 "read": true, 00:14:53.800 "write": true, 00:14:53.800 "unmap": true, 00:14:53.800 "flush": true, 00:14:53.800 "reset": true, 00:14:53.800 "nvme_admin": false, 00:14:53.800 "nvme_io": false, 00:14:53.800 "nvme_io_md": false, 00:14:53.800 "write_zeroes": true, 00:14:53.800 "zcopy": true, 00:14:53.800 "get_zone_info": false, 00:14:53.800 "zone_management": false, 
00:14:53.800 "zone_append": false, 00:14:53.800 "compare": false, 00:14:53.800 "compare_and_write": false, 00:14:53.800 "abort": true, 00:14:53.800 "seek_hole": false, 00:14:53.800 "seek_data": false, 00:14:53.800 "copy": true, 00:14:53.800 "nvme_iov_md": false 00:14:53.800 }, 00:14:53.800 "memory_domains": [ 00:14:53.800 { 00:14:53.800 "dma_device_id": "system", 00:14:53.800 "dma_device_type": 1 00:14:53.800 }, 00:14:53.800 { 00:14:53.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.800 "dma_device_type": 2 00:14:53.800 } 00:14:53.800 ], 00:14:53.800 "driver_specific": {} 00:14:53.800 } 00:14:53.800 ] 00:14:53.800 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.800 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:53.800 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:53.800 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:53.800 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:53.800 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.800 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.800 [2024-11-26 20:27:47.347829] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:53.800 [2024-11-26 20:27:47.347882] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:53.800 [2024-11-26 20:27:47.347924] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:54.058 [2024-11-26 20:27:47.349994] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:54.058 
20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.058 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:54.058 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:54.058 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:54.058 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:54.058 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:54.058 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:54.058 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.058 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.058 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.058 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.058 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.058 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.058 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.058 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.058 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.058 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:14:54.058 "name": "Existed_Raid", 00:14:54.058 "uuid": "1539f52c-ab2a-4aa1-b559-ca83cb4c7381", 00:14:54.058 "strip_size_kb": 64, 00:14:54.058 "state": "configuring", 00:14:54.058 "raid_level": "raid5f", 00:14:54.058 "superblock": true, 00:14:54.059 "num_base_bdevs": 3, 00:14:54.059 "num_base_bdevs_discovered": 2, 00:14:54.059 "num_base_bdevs_operational": 3, 00:14:54.059 "base_bdevs_list": [ 00:14:54.059 { 00:14:54.059 "name": "BaseBdev1", 00:14:54.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.059 "is_configured": false, 00:14:54.059 "data_offset": 0, 00:14:54.059 "data_size": 0 00:14:54.059 }, 00:14:54.059 { 00:14:54.059 "name": "BaseBdev2", 00:14:54.059 "uuid": "8c60bb17-d747-4d0a-ad06-8ed6452c2f8e", 00:14:54.059 "is_configured": true, 00:14:54.059 "data_offset": 2048, 00:14:54.059 "data_size": 63488 00:14:54.059 }, 00:14:54.059 { 00:14:54.059 "name": "BaseBdev3", 00:14:54.059 "uuid": "31ecfb0b-c291-4704-b208-3d0d1dc1e1d0", 00:14:54.059 "is_configured": true, 00:14:54.059 "data_offset": 2048, 00:14:54.059 "data_size": 63488 00:14:54.059 } 00:14:54.059 ] 00:14:54.059 }' 00:14:54.059 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.059 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.317 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:54.317 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.317 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.317 [2024-11-26 20:27:47.795090] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:54.317 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.317 20:27:47 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:54.317 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:54.317 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:54.317 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:54.317 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:54.317 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:54.317 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.317 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.317 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.317 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.317 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.317 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.317 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.317 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.317 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.317 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.317 "name": "Existed_Raid", 00:14:54.317 "uuid": "1539f52c-ab2a-4aa1-b559-ca83cb4c7381", 00:14:54.317 "strip_size_kb": 64, 00:14:54.318 
"state": "configuring", 00:14:54.318 "raid_level": "raid5f", 00:14:54.318 "superblock": true, 00:14:54.318 "num_base_bdevs": 3, 00:14:54.318 "num_base_bdevs_discovered": 1, 00:14:54.318 "num_base_bdevs_operational": 3, 00:14:54.318 "base_bdevs_list": [ 00:14:54.318 { 00:14:54.318 "name": "BaseBdev1", 00:14:54.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.318 "is_configured": false, 00:14:54.318 "data_offset": 0, 00:14:54.318 "data_size": 0 00:14:54.318 }, 00:14:54.318 { 00:14:54.318 "name": null, 00:14:54.318 "uuid": "8c60bb17-d747-4d0a-ad06-8ed6452c2f8e", 00:14:54.318 "is_configured": false, 00:14:54.318 "data_offset": 0, 00:14:54.318 "data_size": 63488 00:14:54.318 }, 00:14:54.318 { 00:14:54.318 "name": "BaseBdev3", 00:14:54.318 "uuid": "31ecfb0b-c291-4704-b208-3d0d1dc1e1d0", 00:14:54.318 "is_configured": true, 00:14:54.318 "data_offset": 2048, 00:14:54.318 "data_size": 63488 00:14:54.318 } 00:14:54.318 ] 00:14:54.318 }' 00:14:54.318 20:27:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.318 20:27:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.886 20:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.886 20:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:54.886 20:27:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.886 20:27:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.886 20:27:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.886 20:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:54.886 20:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev1 00:14:54.886 20:27:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.886 20:27:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.886 [2024-11-26 20:27:48.350835] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:54.886 BaseBdev1 00:14:54.886 20:27:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.886 20:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:54.886 20:27:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:54.886 20:27:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:54.886 20:27:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:54.886 20:27:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:54.886 20:27:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:54.886 20:27:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:54.886 20:27:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.886 20:27:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.886 20:27:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.886 20:27:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:54.886 20:27:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.886 20:27:48 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:54.886 [ 00:14:54.886 { 00:14:54.886 "name": "BaseBdev1", 00:14:54.886 "aliases": [ 00:14:54.886 "684ab62f-2f3c-408e-9bac-57075fee2e36" 00:14:54.887 ], 00:14:54.887 "product_name": "Malloc disk", 00:14:54.887 "block_size": 512, 00:14:54.887 "num_blocks": 65536, 00:14:54.887 "uuid": "684ab62f-2f3c-408e-9bac-57075fee2e36", 00:14:54.887 "assigned_rate_limits": { 00:14:54.887 "rw_ios_per_sec": 0, 00:14:54.887 "rw_mbytes_per_sec": 0, 00:14:54.887 "r_mbytes_per_sec": 0, 00:14:54.887 "w_mbytes_per_sec": 0 00:14:54.887 }, 00:14:54.887 "claimed": true, 00:14:54.887 "claim_type": "exclusive_write", 00:14:54.887 "zoned": false, 00:14:54.887 "supported_io_types": { 00:14:54.887 "read": true, 00:14:54.887 "write": true, 00:14:54.887 "unmap": true, 00:14:54.887 "flush": true, 00:14:54.887 "reset": true, 00:14:54.887 "nvme_admin": false, 00:14:54.887 "nvme_io": false, 00:14:54.887 "nvme_io_md": false, 00:14:54.887 "write_zeroes": true, 00:14:54.887 "zcopy": true, 00:14:54.887 "get_zone_info": false, 00:14:54.887 "zone_management": false, 00:14:54.887 "zone_append": false, 00:14:54.887 "compare": false, 00:14:54.887 "compare_and_write": false, 00:14:54.887 "abort": true, 00:14:54.887 "seek_hole": false, 00:14:54.887 "seek_data": false, 00:14:54.887 "copy": true, 00:14:54.887 "nvme_iov_md": false 00:14:54.887 }, 00:14:54.887 "memory_domains": [ 00:14:54.887 { 00:14:54.887 "dma_device_id": "system", 00:14:54.887 "dma_device_type": 1 00:14:54.887 }, 00:14:54.887 { 00:14:54.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.887 "dma_device_type": 2 00:14:54.887 } 00:14:54.887 ], 00:14:54.887 "driver_specific": {} 00:14:54.887 } 00:14:54.887 ] 00:14:54.887 20:27:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.887 20:27:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:54.887 20:27:48 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:54.887 20:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:54.887 20:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:54.887 20:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:54.887 20:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:54.887 20:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:54.887 20:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.887 20:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.887 20:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.887 20:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.887 20:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.887 20:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.887 20:27:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.887 20:27:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.887 20:27:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.887 20:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.887 "name": "Existed_Raid", 00:14:54.887 "uuid": "1539f52c-ab2a-4aa1-b559-ca83cb4c7381", 00:14:54.887 "strip_size_kb": 64, 00:14:54.887 
"state": "configuring", 00:14:54.887 "raid_level": "raid5f", 00:14:54.887 "superblock": true, 00:14:54.887 "num_base_bdevs": 3, 00:14:54.887 "num_base_bdevs_discovered": 2, 00:14:54.887 "num_base_bdevs_operational": 3, 00:14:54.887 "base_bdevs_list": [ 00:14:54.887 { 00:14:54.887 "name": "BaseBdev1", 00:14:54.887 "uuid": "684ab62f-2f3c-408e-9bac-57075fee2e36", 00:14:54.887 "is_configured": true, 00:14:54.887 "data_offset": 2048, 00:14:54.887 "data_size": 63488 00:14:54.887 }, 00:14:54.887 { 00:14:54.887 "name": null, 00:14:54.887 "uuid": "8c60bb17-d747-4d0a-ad06-8ed6452c2f8e", 00:14:54.887 "is_configured": false, 00:14:54.887 "data_offset": 0, 00:14:54.887 "data_size": 63488 00:14:54.887 }, 00:14:54.887 { 00:14:54.887 "name": "BaseBdev3", 00:14:54.887 "uuid": "31ecfb0b-c291-4704-b208-3d0d1dc1e1d0", 00:14:54.887 "is_configured": true, 00:14:54.887 "data_offset": 2048, 00:14:54.887 "data_size": 63488 00:14:54.887 } 00:14:54.887 ] 00:14:54.887 }' 00:14:54.887 20:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.887 20:27:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.455 20:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.455 20:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:55.455 20:27:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.455 20:27:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.455 20:27:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.455 20:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:55.455 20:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd 
bdev_raid_remove_base_bdev BaseBdev3 00:14:55.455 20:27:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.455 20:27:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.455 [2024-11-26 20:27:48.902002] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:55.455 20:27:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.455 20:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:55.455 20:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:55.455 20:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:55.455 20:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:55.455 20:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:55.455 20:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:55.455 20:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.455 20:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.455 20:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.455 20:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.455 20:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.455 20:27:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.455 20:27:48 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:55.455 20:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.455 20:27:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.455 20:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.455 "name": "Existed_Raid", 00:14:55.455 "uuid": "1539f52c-ab2a-4aa1-b559-ca83cb4c7381", 00:14:55.455 "strip_size_kb": 64, 00:14:55.455 "state": "configuring", 00:14:55.455 "raid_level": "raid5f", 00:14:55.455 "superblock": true, 00:14:55.455 "num_base_bdevs": 3, 00:14:55.455 "num_base_bdevs_discovered": 1, 00:14:55.455 "num_base_bdevs_operational": 3, 00:14:55.455 "base_bdevs_list": [ 00:14:55.455 { 00:14:55.455 "name": "BaseBdev1", 00:14:55.455 "uuid": "684ab62f-2f3c-408e-9bac-57075fee2e36", 00:14:55.455 "is_configured": true, 00:14:55.455 "data_offset": 2048, 00:14:55.455 "data_size": 63488 00:14:55.455 }, 00:14:55.455 { 00:14:55.455 "name": null, 00:14:55.455 "uuid": "8c60bb17-d747-4d0a-ad06-8ed6452c2f8e", 00:14:55.455 "is_configured": false, 00:14:55.455 "data_offset": 0, 00:14:55.455 "data_size": 63488 00:14:55.455 }, 00:14:55.455 { 00:14:55.455 "name": null, 00:14:55.455 "uuid": "31ecfb0b-c291-4704-b208-3d0d1dc1e1d0", 00:14:55.455 "is_configured": false, 00:14:55.455 "data_offset": 0, 00:14:55.455 "data_size": 63488 00:14:55.455 } 00:14:55.455 ] 00:14:55.455 }' 00:14:55.455 20:27:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.455 20:27:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.022 20:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.022 20:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.022 20:27:49 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.022 20:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:56.022 20:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.022 20:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:56.022 20:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:56.022 20:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.022 20:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.022 [2024-11-26 20:27:49.417140] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:56.022 20:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.022 20:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:56.022 20:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:56.022 20:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:56.022 20:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:56.022 20:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.022 20:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:56.022 20:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.022 20:27:49 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.022 20:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.022 20:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.022 20:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.022 20:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.022 20:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.022 20:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.022 20:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.022 20:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.022 "name": "Existed_Raid", 00:14:56.022 "uuid": "1539f52c-ab2a-4aa1-b559-ca83cb4c7381", 00:14:56.022 "strip_size_kb": 64, 00:14:56.022 "state": "configuring", 00:14:56.022 "raid_level": "raid5f", 00:14:56.022 "superblock": true, 00:14:56.022 "num_base_bdevs": 3, 00:14:56.022 "num_base_bdevs_discovered": 2, 00:14:56.022 "num_base_bdevs_operational": 3, 00:14:56.022 "base_bdevs_list": [ 00:14:56.022 { 00:14:56.022 "name": "BaseBdev1", 00:14:56.022 "uuid": "684ab62f-2f3c-408e-9bac-57075fee2e36", 00:14:56.022 "is_configured": true, 00:14:56.022 "data_offset": 2048, 00:14:56.022 "data_size": 63488 00:14:56.022 }, 00:14:56.022 { 00:14:56.022 "name": null, 00:14:56.022 "uuid": "8c60bb17-d747-4d0a-ad06-8ed6452c2f8e", 00:14:56.022 "is_configured": false, 00:14:56.022 "data_offset": 0, 00:14:56.022 "data_size": 63488 00:14:56.022 }, 00:14:56.022 { 00:14:56.022 "name": "BaseBdev3", 00:14:56.022 "uuid": "31ecfb0b-c291-4704-b208-3d0d1dc1e1d0", 00:14:56.022 "is_configured": true, 00:14:56.022 "data_offset": 
2048, 00:14:56.022 "data_size": 63488 00:14:56.022 } 00:14:56.022 ] 00:14:56.022 }' 00:14:56.022 20:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.022 20:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.591 20:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.591 20:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.591 20:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.591 20:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:56.591 20:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.591 20:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:56.591 20:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:56.591 20:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.591 20:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.591 [2024-11-26 20:27:49.940426] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:56.591 20:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.591 20:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:56.591 20:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:56.591 20:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:56.591 20:27:49 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:56.591 20:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.591 20:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:56.591 20:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.591 20:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.591 20:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.591 20:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.591 20:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.591 20:27:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.591 20:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.591 20:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.591 20:27:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.591 20:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.591 "name": "Existed_Raid", 00:14:56.591 "uuid": "1539f52c-ab2a-4aa1-b559-ca83cb4c7381", 00:14:56.591 "strip_size_kb": 64, 00:14:56.591 "state": "configuring", 00:14:56.591 "raid_level": "raid5f", 00:14:56.591 "superblock": true, 00:14:56.591 "num_base_bdevs": 3, 00:14:56.591 "num_base_bdevs_discovered": 1, 00:14:56.591 "num_base_bdevs_operational": 3, 00:14:56.591 "base_bdevs_list": [ 00:14:56.591 { 00:14:56.591 "name": null, 00:14:56.591 "uuid": "684ab62f-2f3c-408e-9bac-57075fee2e36", 
00:14:56.591 "is_configured": false, 00:14:56.591 "data_offset": 0, 00:14:56.591 "data_size": 63488 00:14:56.591 }, 00:14:56.591 { 00:14:56.591 "name": null, 00:14:56.591 "uuid": "8c60bb17-d747-4d0a-ad06-8ed6452c2f8e", 00:14:56.591 "is_configured": false, 00:14:56.591 "data_offset": 0, 00:14:56.591 "data_size": 63488 00:14:56.591 }, 00:14:56.591 { 00:14:56.591 "name": "BaseBdev3", 00:14:56.591 "uuid": "31ecfb0b-c291-4704-b208-3d0d1dc1e1d0", 00:14:56.591 "is_configured": true, 00:14:56.591 "data_offset": 2048, 00:14:56.591 "data_size": 63488 00:14:56.591 } 00:14:56.591 ] 00:14:56.591 }' 00:14:56.591 20:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.591 20:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.851 20:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:56.852 20:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.852 20:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.852 20:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.852 20:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.111 20:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:57.111 20:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:57.111 20:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.111 20:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.111 [2024-11-26 20:27:50.415780] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 
is claimed 00:14:57.111 20:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.111 20:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:57.111 20:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:57.111 20:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:57.111 20:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:57.111 20:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:57.111 20:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:57.111 20:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.111 20:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.111 20:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.111 20:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.111 20:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.111 20:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.111 20:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.111 20:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.111 20:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.111 20:27:50 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.111 "name": "Existed_Raid", 00:14:57.111 "uuid": "1539f52c-ab2a-4aa1-b559-ca83cb4c7381", 00:14:57.111 "strip_size_kb": 64, 00:14:57.111 "state": "configuring", 00:14:57.111 "raid_level": "raid5f", 00:14:57.111 "superblock": true, 00:14:57.111 "num_base_bdevs": 3, 00:14:57.111 "num_base_bdevs_discovered": 2, 00:14:57.111 "num_base_bdevs_operational": 3, 00:14:57.111 "base_bdevs_list": [ 00:14:57.111 { 00:14:57.111 "name": null, 00:14:57.111 "uuid": "684ab62f-2f3c-408e-9bac-57075fee2e36", 00:14:57.111 "is_configured": false, 00:14:57.111 "data_offset": 0, 00:14:57.111 "data_size": 63488 00:14:57.111 }, 00:14:57.111 { 00:14:57.111 "name": "BaseBdev2", 00:14:57.111 "uuid": "8c60bb17-d747-4d0a-ad06-8ed6452c2f8e", 00:14:57.111 "is_configured": true, 00:14:57.111 "data_offset": 2048, 00:14:57.111 "data_size": 63488 00:14:57.111 }, 00:14:57.111 { 00:14:57.111 "name": "BaseBdev3", 00:14:57.111 "uuid": "31ecfb0b-c291-4704-b208-3d0d1dc1e1d0", 00:14:57.111 "is_configured": true, 00:14:57.111 "data_offset": 2048, 00:14:57.111 "data_size": 63488 00:14:57.111 } 00:14:57.111 ] 00:14:57.111 }' 00:14:57.111 20:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.111 20:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.371 20:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.371 20:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:57.371 20:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.371 20:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.371 20:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.632 20:27:50 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:57.632 20:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.632 20:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:57.632 20:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.632 20:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.632 20:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.632 20:27:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 684ab62f-2f3c-408e-9bac-57075fee2e36 00:14:57.632 20:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.632 20:27:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.633 [2024-11-26 20:27:51.011466] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:57.633 [2024-11-26 20:27:51.011684] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:14:57.633 [2024-11-26 20:27:51.011701] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:57.633 [2024-11-26 20:27:51.011950] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:57.633 NewBaseBdev 00:14:57.633 [2024-11-26 20:27:51.012469] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:14:57.633 [2024-11-26 20:27:51.012486] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:14:57.633 [2024-11-26 20:27:51.012589] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:14:57.633 20:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.633 20:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:57.633 20:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:14:57.633 20:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:57.633 20:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:57.633 20:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:57.633 20:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:57.633 20:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:57.633 20:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.633 20:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.633 20:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.633 20:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:57.633 20:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.633 20:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.633 [ 00:14:57.633 { 00:14:57.633 "name": "NewBaseBdev", 00:14:57.633 "aliases": [ 00:14:57.633 "684ab62f-2f3c-408e-9bac-57075fee2e36" 00:14:57.633 ], 00:14:57.633 "product_name": "Malloc disk", 00:14:57.633 "block_size": 512, 00:14:57.633 "num_blocks": 65536, 00:14:57.633 "uuid": "684ab62f-2f3c-408e-9bac-57075fee2e36", 
00:14:57.633 "assigned_rate_limits": { 00:14:57.633 "rw_ios_per_sec": 0, 00:14:57.633 "rw_mbytes_per_sec": 0, 00:14:57.633 "r_mbytes_per_sec": 0, 00:14:57.633 "w_mbytes_per_sec": 0 00:14:57.633 }, 00:14:57.633 "claimed": true, 00:14:57.633 "claim_type": "exclusive_write", 00:14:57.633 "zoned": false, 00:14:57.633 "supported_io_types": { 00:14:57.633 "read": true, 00:14:57.633 "write": true, 00:14:57.633 "unmap": true, 00:14:57.633 "flush": true, 00:14:57.633 "reset": true, 00:14:57.633 "nvme_admin": false, 00:14:57.633 "nvme_io": false, 00:14:57.633 "nvme_io_md": false, 00:14:57.633 "write_zeroes": true, 00:14:57.633 "zcopy": true, 00:14:57.633 "get_zone_info": false, 00:14:57.633 "zone_management": false, 00:14:57.633 "zone_append": false, 00:14:57.633 "compare": false, 00:14:57.633 "compare_and_write": false, 00:14:57.633 "abort": true, 00:14:57.633 "seek_hole": false, 00:14:57.633 "seek_data": false, 00:14:57.633 "copy": true, 00:14:57.633 "nvme_iov_md": false 00:14:57.633 }, 00:14:57.633 "memory_domains": [ 00:14:57.633 { 00:14:57.633 "dma_device_id": "system", 00:14:57.633 "dma_device_type": 1 00:14:57.633 }, 00:14:57.633 { 00:14:57.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.633 "dma_device_type": 2 00:14:57.633 } 00:14:57.633 ], 00:14:57.633 "driver_specific": {} 00:14:57.633 } 00:14:57.633 ] 00:14:57.633 20:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.633 20:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:57.633 20:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:57.633 20:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:57.633 20:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:57.633 20:27:51 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:57.633 20:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:57.633 20:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:57.633 20:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.633 20:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.633 20:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.633 20:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.633 20:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.633 20:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.633 20:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.633 20:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.633 20:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.633 20:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.633 "name": "Existed_Raid", 00:14:57.633 "uuid": "1539f52c-ab2a-4aa1-b559-ca83cb4c7381", 00:14:57.633 "strip_size_kb": 64, 00:14:57.633 "state": "online", 00:14:57.633 "raid_level": "raid5f", 00:14:57.633 "superblock": true, 00:14:57.633 "num_base_bdevs": 3, 00:14:57.633 "num_base_bdevs_discovered": 3, 00:14:57.633 "num_base_bdevs_operational": 3, 00:14:57.633 "base_bdevs_list": [ 00:14:57.633 { 00:14:57.633 "name": "NewBaseBdev", 00:14:57.633 "uuid": "684ab62f-2f3c-408e-9bac-57075fee2e36", 
00:14:57.633 "is_configured": true, 00:14:57.633 "data_offset": 2048, 00:14:57.633 "data_size": 63488 00:14:57.633 }, 00:14:57.633 { 00:14:57.633 "name": "BaseBdev2", 00:14:57.633 "uuid": "8c60bb17-d747-4d0a-ad06-8ed6452c2f8e", 00:14:57.633 "is_configured": true, 00:14:57.633 "data_offset": 2048, 00:14:57.633 "data_size": 63488 00:14:57.633 }, 00:14:57.633 { 00:14:57.633 "name": "BaseBdev3", 00:14:57.633 "uuid": "31ecfb0b-c291-4704-b208-3d0d1dc1e1d0", 00:14:57.633 "is_configured": true, 00:14:57.633 "data_offset": 2048, 00:14:57.633 "data_size": 63488 00:14:57.633 } 00:14:57.633 ] 00:14:57.633 }' 00:14:57.633 20:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.633 20:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.203 20:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:58.203 20:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:58.203 20:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:58.203 20:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:58.203 20:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:58.203 20:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:58.203 20:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:58.203 20:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:58.203 20:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.203 20:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.203 
[2024-11-26 20:27:51.466963] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:58.203 20:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.203 20:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:58.203 "name": "Existed_Raid", 00:14:58.203 "aliases": [ 00:14:58.203 "1539f52c-ab2a-4aa1-b559-ca83cb4c7381" 00:14:58.203 ], 00:14:58.203 "product_name": "Raid Volume", 00:14:58.203 "block_size": 512, 00:14:58.203 "num_blocks": 126976, 00:14:58.203 "uuid": "1539f52c-ab2a-4aa1-b559-ca83cb4c7381", 00:14:58.203 "assigned_rate_limits": { 00:14:58.203 "rw_ios_per_sec": 0, 00:14:58.203 "rw_mbytes_per_sec": 0, 00:14:58.203 "r_mbytes_per_sec": 0, 00:14:58.203 "w_mbytes_per_sec": 0 00:14:58.203 }, 00:14:58.203 "claimed": false, 00:14:58.203 "zoned": false, 00:14:58.203 "supported_io_types": { 00:14:58.203 "read": true, 00:14:58.203 "write": true, 00:14:58.203 "unmap": false, 00:14:58.203 "flush": false, 00:14:58.203 "reset": true, 00:14:58.203 "nvme_admin": false, 00:14:58.203 "nvme_io": false, 00:14:58.203 "nvme_io_md": false, 00:14:58.203 "write_zeroes": true, 00:14:58.203 "zcopy": false, 00:14:58.203 "get_zone_info": false, 00:14:58.203 "zone_management": false, 00:14:58.203 "zone_append": false, 00:14:58.203 "compare": false, 00:14:58.203 "compare_and_write": false, 00:14:58.203 "abort": false, 00:14:58.203 "seek_hole": false, 00:14:58.203 "seek_data": false, 00:14:58.203 "copy": false, 00:14:58.203 "nvme_iov_md": false 00:14:58.203 }, 00:14:58.203 "driver_specific": { 00:14:58.203 "raid": { 00:14:58.203 "uuid": "1539f52c-ab2a-4aa1-b559-ca83cb4c7381", 00:14:58.203 "strip_size_kb": 64, 00:14:58.203 "state": "online", 00:14:58.203 "raid_level": "raid5f", 00:14:58.203 "superblock": true, 00:14:58.203 "num_base_bdevs": 3, 00:14:58.203 "num_base_bdevs_discovered": 3, 00:14:58.203 "num_base_bdevs_operational": 3, 00:14:58.203 "base_bdevs_list": 
[ 00:14:58.203 { 00:14:58.203 "name": "NewBaseBdev", 00:14:58.203 "uuid": "684ab62f-2f3c-408e-9bac-57075fee2e36", 00:14:58.203 "is_configured": true, 00:14:58.203 "data_offset": 2048, 00:14:58.203 "data_size": 63488 00:14:58.203 }, 00:14:58.203 { 00:14:58.203 "name": "BaseBdev2", 00:14:58.203 "uuid": "8c60bb17-d747-4d0a-ad06-8ed6452c2f8e", 00:14:58.203 "is_configured": true, 00:14:58.203 "data_offset": 2048, 00:14:58.203 "data_size": 63488 00:14:58.203 }, 00:14:58.203 { 00:14:58.203 "name": "BaseBdev3", 00:14:58.203 "uuid": "31ecfb0b-c291-4704-b208-3d0d1dc1e1d0", 00:14:58.203 "is_configured": true, 00:14:58.203 "data_offset": 2048, 00:14:58.203 "data_size": 63488 00:14:58.203 } 00:14:58.203 ] 00:14:58.203 } 00:14:58.203 } 00:14:58.203 }' 00:14:58.203 20:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:58.203 20:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:58.203 BaseBdev2 00:14:58.203 BaseBdev3' 00:14:58.203 20:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:58.203 20:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:58.203 20:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:58.203 20:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:58.203 20:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.203 20:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.203 20:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:14:58.203 20:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.203 20:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:58.203 20:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:58.203 20:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:58.203 20:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:58.203 20:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:58.203 20:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.203 20:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.203 20:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.203 20:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:58.203 20:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:58.203 20:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:58.203 20:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:58.203 20:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:58.203 20:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.203 20:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.203 20:27:51 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.203 20:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:58.203 20:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:58.203 20:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:58.203 20:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.203 20:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.203 [2024-11-26 20:27:51.694396] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:58.203 [2024-11-26 20:27:51.694429] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:58.203 [2024-11-26 20:27:51.694506] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:58.203 [2024-11-26 20:27:51.694801] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:58.203 [2024-11-26 20:27:51.694825] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:14:58.203 20:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.203 20:27:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 91634 00:14:58.203 20:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 91634 ']' 00:14:58.203 20:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 91634 00:14:58.203 20:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:14:58.203 20:27:51 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:58.203 20:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91634 00:14:58.203 20:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:58.203 20:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:58.203 killing process with pid 91634 00:14:58.203 20:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91634' 00:14:58.203 20:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 91634 00:14:58.203 [2024-11-26 20:27:51.746326] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:58.204 20:27:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 91634 00:14:58.463 [2024-11-26 20:27:51.799074] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:58.722 20:27:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:58.722 00:14:58.722 real 0m9.314s 00:14:58.722 user 0m15.639s 00:14:58.722 sys 0m2.029s 00:14:58.722 20:27:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:58.722 20:27:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.722 ************************************ 00:14:58.722 END TEST raid5f_state_function_test_sb 00:14:58.722 ************************************ 00:14:58.722 20:27:52 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:14:58.722 20:27:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:58.722 20:27:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:58.722 20:27:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:14:58.722 ************************************ 00:14:58.722 START TEST raid5f_superblock_test 00:14:58.722 ************************************ 00:14:58.722 20:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 3 00:14:58.722 20:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:14:58.722 20:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:14:58.722 20:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:58.722 20:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:58.722 20:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:58.722 20:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:58.722 20:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:58.722 20:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:58.722 20:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:58.722 20:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:58.722 20:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:58.722 20:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:58.722 20:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:58.722 20:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:14:58.722 20:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:58.722 20:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 
00:14:58.722 20:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=92242 00:14:58.722 20:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:58.722 20:27:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 92242 00:14:58.722 20:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 92242 ']' 00:14:58.722 20:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.722 20:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:58.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.722 20:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.722 20:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:58.722 20:27:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.078 [2024-11-26 20:27:52.305313] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:14:59.078 [2024-11-26 20:27:52.305446] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92242 ] 00:14:59.078 [2024-11-26 20:27:52.479445] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.078 [2024-11-26 20:27:52.585976] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.337 [2024-11-26 20:27:52.657458] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:59.337 [2024-11-26 20:27:52.657508] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:59.907 20:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:59.907 20:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:14:59.907 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:59.907 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:59.907 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:59.907 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:59.907 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:59.907 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:59.907 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:59.907 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:59.907 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:14:59.907 20:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.907 20:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.907 malloc1 00:14:59.907 20:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.907 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:59.907 20:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.907 20:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.907 [2024-11-26 20:27:53.177868] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:59.907 [2024-11-26 20:27:53.177960] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.907 [2024-11-26 20:27:53.177984] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:59.907 [2024-11-26 20:27:53.177999] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.907 [2024-11-26 20:27:53.180298] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.907 [2024-11-26 20:27:53.180339] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:59.907 pt1 00:14:59.907 20:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.907 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:59.907 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:59.907 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:59.907 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:14:59.907 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:59.907 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:59.907 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:59.907 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:59.907 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:59.907 20:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.907 20:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.907 malloc2 00:14:59.907 20:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.907 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:59.907 20:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.907 20:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.907 [2024-11-26 20:27:53.223945] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:59.907 [2024-11-26 20:27:53.224014] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.907 [2024-11-26 20:27:53.224033] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:59.907 [2024-11-26 20:27:53.224046] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.907 [2024-11-26 20:27:53.226561] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.907 [2024-11-26 20:27:53.226606] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:59.907 pt2 00:14:59.907 20:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.907 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:59.907 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:59.907 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:59.907 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:59.907 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:59.907 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:59.907 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:59.907 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:59.907 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:59.907 20:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.907 20:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.907 malloc3 00:14:59.907 20:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.907 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:59.907 20:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.907 20:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.907 [2024-11-26 20:27:53.258450] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:59.907 [2024-11-26 20:27:53.258513] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.907 [2024-11-26 20:27:53.258532] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:59.907 [2024-11-26 20:27:53.258544] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.907 [2024-11-26 20:27:53.261018] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.907 [2024-11-26 20:27:53.261064] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:59.907 pt3 00:14:59.907 20:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.907 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:59.907 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:59.907 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:14:59.907 20:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.907 20:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.907 [2024-11-26 20:27:53.270478] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:59.907 [2024-11-26 20:27:53.272650] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:59.907 [2024-11-26 20:27:53.272725] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:59.907 [2024-11-26 20:27:53.272900] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:14:59.907 [2024-11-26 20:27:53.272921] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
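The records above report the assembled raid5f volume as `blockcnt 126976, blocklen 512`, built from three 32 MiB malloc bdevs whose base JSON later shows `data_offset: 2048` and `data_size: 63488`. Those numbers are internally consistent, and the arithmetic can be sketched as follows (a minimal illustration of the figures in this log; the variable names are ours, not SPDK's):

```python
# Capacity arithmetic visible in the log above (names are illustrative,
# not SPDK APIs). Each base bdev is a 32 MiB malloc bdev with 512-byte
# blocks; the raid superblock reserves a 2048-block data_offset, and
# raid5f stores one parity strip per stripe, so the exported capacity
# is data_size * (num_base_bdevs - 1).

MIB = 1024 * 1024
block_size = 512
base_blocks = 32 * MIB // block_size   # 65536 blocks per malloc bdev
data_offset = 2048                     # blocks reserved for the superblock
data_size = base_blocks - data_offset  # 63488, as reported per base bdev

num_base_bdevs = 3
raid5f_blockcnt = data_size * (num_base_bdevs - 1)

print(base_blocks, data_size, raid5f_blockcnt)  # 65536 63488 126976
```

The `126976` result matches the `raid_bdev_configure_cont` debug line, and `63488` matches the per-base-bdev `data_size` in the dumped JSON.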
00:14:59.908 [2024-11-26 20:27:53.273247] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:14:59.908 [2024-11-26 20:27:53.273788] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:14:59.908 [2024-11-26 20:27:53.273815] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:14:59.908 [2024-11-26 20:27:53.273976] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:59.908 20:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.908 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:59.908 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:59.908 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:59.908 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:59.908 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:59.908 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:59.908 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.908 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.908 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.908 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.908 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.908 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:14:59.908 20:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.908 20:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.908 20:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.908 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.908 "name": "raid_bdev1", 00:14:59.908 "uuid": "fcb1ad1f-231a-41fd-9fca-bf65f2ade625", 00:14:59.908 "strip_size_kb": 64, 00:14:59.908 "state": "online", 00:14:59.908 "raid_level": "raid5f", 00:14:59.908 "superblock": true, 00:14:59.908 "num_base_bdevs": 3, 00:14:59.908 "num_base_bdevs_discovered": 3, 00:14:59.908 "num_base_bdevs_operational": 3, 00:14:59.908 "base_bdevs_list": [ 00:14:59.908 { 00:14:59.908 "name": "pt1", 00:14:59.908 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:59.908 "is_configured": true, 00:14:59.908 "data_offset": 2048, 00:14:59.908 "data_size": 63488 00:14:59.908 }, 00:14:59.908 { 00:14:59.908 "name": "pt2", 00:14:59.908 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:59.908 "is_configured": true, 00:14:59.908 "data_offset": 2048, 00:14:59.908 "data_size": 63488 00:14:59.908 }, 00:14:59.908 { 00:14:59.908 "name": "pt3", 00:14:59.908 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:59.908 "is_configured": true, 00:14:59.908 "data_offset": 2048, 00:14:59.908 "data_size": 63488 00:14:59.908 } 00:14:59.908 ] 00:14:59.908 }' 00:14:59.908 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.908 20:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.477 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:00.477 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:00.477 20:27:53 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:00.477 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:00.477 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:00.477 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:00.477 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:00.477 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:00.477 20:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.477 20:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.477 [2024-11-26 20:27:53.735566] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:00.477 20:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.477 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:00.477 "name": "raid_bdev1", 00:15:00.477 "aliases": [ 00:15:00.477 "fcb1ad1f-231a-41fd-9fca-bf65f2ade625" 00:15:00.477 ], 00:15:00.477 "product_name": "Raid Volume", 00:15:00.477 "block_size": 512, 00:15:00.477 "num_blocks": 126976, 00:15:00.477 "uuid": "fcb1ad1f-231a-41fd-9fca-bf65f2ade625", 00:15:00.477 "assigned_rate_limits": { 00:15:00.477 "rw_ios_per_sec": 0, 00:15:00.477 "rw_mbytes_per_sec": 0, 00:15:00.477 "r_mbytes_per_sec": 0, 00:15:00.477 "w_mbytes_per_sec": 0 00:15:00.477 }, 00:15:00.477 "claimed": false, 00:15:00.477 "zoned": false, 00:15:00.477 "supported_io_types": { 00:15:00.477 "read": true, 00:15:00.477 "write": true, 00:15:00.477 "unmap": false, 00:15:00.477 "flush": false, 00:15:00.477 "reset": true, 00:15:00.477 "nvme_admin": false, 00:15:00.477 "nvme_io": false, 00:15:00.477 "nvme_io_md": false, 
00:15:00.477 "write_zeroes": true, 00:15:00.477 "zcopy": false, 00:15:00.477 "get_zone_info": false, 00:15:00.477 "zone_management": false, 00:15:00.477 "zone_append": false, 00:15:00.477 "compare": false, 00:15:00.477 "compare_and_write": false, 00:15:00.477 "abort": false, 00:15:00.477 "seek_hole": false, 00:15:00.477 "seek_data": false, 00:15:00.477 "copy": false, 00:15:00.477 "nvme_iov_md": false 00:15:00.477 }, 00:15:00.477 "driver_specific": { 00:15:00.477 "raid": { 00:15:00.477 "uuid": "fcb1ad1f-231a-41fd-9fca-bf65f2ade625", 00:15:00.477 "strip_size_kb": 64, 00:15:00.477 "state": "online", 00:15:00.477 "raid_level": "raid5f", 00:15:00.477 "superblock": true, 00:15:00.477 "num_base_bdevs": 3, 00:15:00.477 "num_base_bdevs_discovered": 3, 00:15:00.478 "num_base_bdevs_operational": 3, 00:15:00.478 "base_bdevs_list": [ 00:15:00.478 { 00:15:00.478 "name": "pt1", 00:15:00.478 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:00.478 "is_configured": true, 00:15:00.478 "data_offset": 2048, 00:15:00.478 "data_size": 63488 00:15:00.478 }, 00:15:00.478 { 00:15:00.478 "name": "pt2", 00:15:00.478 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:00.478 "is_configured": true, 00:15:00.478 "data_offset": 2048, 00:15:00.478 "data_size": 63488 00:15:00.478 }, 00:15:00.478 { 00:15:00.478 "name": "pt3", 00:15:00.478 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:00.478 "is_configured": true, 00:15:00.478 "data_offset": 2048, 00:15:00.478 "data_size": 63488 00:15:00.478 } 00:15:00.478 ] 00:15:00.478 } 00:15:00.478 } 00:15:00.478 }' 00:15:00.478 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:00.478 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:00.478 pt2 00:15:00.478 pt3' 00:15:00.478 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:15:00.478 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:00.478 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:00.478 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:00.478 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:00.478 20:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.478 20:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.478 20:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.478 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:00.478 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:00.478 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:00.478 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:00.478 20:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.478 20:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.478 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:00.478 20:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.478 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:00.478 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:00.478 
20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:00.478 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:00.478 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:00.478 20:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.478 20:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.478 20:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.478 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:00.478 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:00.478 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:00.478 20:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.478 20:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.478 20:27:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:00.478 [2024-11-26 20:27:53.983109] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:00.478 20:27:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.478 20:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=fcb1ad1f-231a-41fd-9fca-bf65f2ade625 00:15:00.478 20:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z fcb1ad1f-231a-41fd-9fca-bf65f2ade625 ']' 00:15:00.478 20:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:00.478 20:27:54 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.478 20:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.738 [2024-11-26 20:27:54.030798] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:00.738 [2024-11-26 20:27:54.030832] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:00.738 [2024-11-26 20:27:54.030949] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:00.738 [2024-11-26 20:27:54.031041] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:00.738 [2024-11-26 20:27:54.031054] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.738 [2024-11-26 20:27:54.174587] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:00.738 [2024-11-26 20:27:54.176677] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:00.738 [2024-11-26 20:27:54.176743] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:00.738 [2024-11-26 20:27:54.176797] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:00.738 [2024-11-26 20:27:54.176846] 
bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:00.738 [2024-11-26 20:27:54.176868] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:00.738 [2024-11-26 20:27:54.176882] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:00.738 [2024-11-26 20:27:54.176895] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:15:00.738 request: 00:15:00.738 { 00:15:00.738 "name": "raid_bdev1", 00:15:00.738 "raid_level": "raid5f", 00:15:00.738 "base_bdevs": [ 00:15:00.738 "malloc1", 00:15:00.738 "malloc2", 00:15:00.738 "malloc3" 00:15:00.738 ], 00:15:00.738 "strip_size_kb": 64, 00:15:00.738 "superblock": false, 00:15:00.738 "method": "bdev_raid_create", 00:15:00.738 "req_id": 1 00:15:00.738 } 00:15:00.738 Got JSON-RPC error response 00:15:00.738 response: 00:15:00.738 { 00:15:00.738 "code": -17, 00:15:00.738 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:00.738 } 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
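Throughout this run, `verify_raid_bdev_state` extracts one entry from `rpc_cmd bdev_raid_get_bdevs all` with the filter `jq -r '.[] | select(.name == "raid_bdev1")'` and then checks its fields. For readers without jq at hand, the same selection can be mirrored in Python (the sample record is trimmed from the JSON shown in this log; `select_bdev` is an illustrative helper, not part of SPDK):

```python
import json

# Mirror of the jq filter `.[] | select(.name == "raid_bdev1")` that
# verify_raid_bdev_state applies to the bdev_raid_get_bdevs output.
sample = json.loads("""
[{"name": "raid_bdev1", "state": "online", "raid_level": "raid5f",
  "strip_size_kb": 64, "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 3, "num_base_bdevs_operational": 3}]
""")

def select_bdev(bdevs, name):
    """Return the first entry whose "name" matches, else None."""
    return next((b for b in bdevs if b.get("name") == name), None)

info = select_bdev(sample, "raid_bdev1")
print(info["state"], info["raid_level"], info["strip_size_kb"])
```

An empty result (the `raid_bdev=` case in the log, after deletion) corresponds to `select_bdev` returning `None` here.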
00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.738 [2024-11-26 20:27:54.238439] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:00.738 [2024-11-26 20:27:54.238505] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:00.738 [2024-11-26 20:27:54.238523] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:00.738 [2024-11-26 20:27:54.238535] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.738 [2024-11-26 20:27:54.240966] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.738 [2024-11-26 20:27:54.241009] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:00.738 [2024-11-26 20:27:54.241091] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:00.738 [2024-11-26 20:27:54.241133] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:00.738 pt1 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring 
raid5f 64 3 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.738 20:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.998 20:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.998 "name": "raid_bdev1", 00:15:00.998 "uuid": "fcb1ad1f-231a-41fd-9fca-bf65f2ade625", 00:15:00.998 "strip_size_kb": 64, 00:15:00.998 "state": "configuring", 00:15:00.998 "raid_level": "raid5f", 00:15:00.998 "superblock": true, 00:15:00.998 "num_base_bdevs": 3, 00:15:00.998 "num_base_bdevs_discovered": 1, 00:15:00.998 
"num_base_bdevs_operational": 3, 00:15:00.998 "base_bdevs_list": [ 00:15:00.998 { 00:15:00.998 "name": "pt1", 00:15:00.998 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:00.998 "is_configured": true, 00:15:00.998 "data_offset": 2048, 00:15:00.998 "data_size": 63488 00:15:00.998 }, 00:15:00.998 { 00:15:00.998 "name": null, 00:15:00.998 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:00.998 "is_configured": false, 00:15:00.998 "data_offset": 2048, 00:15:00.998 "data_size": 63488 00:15:00.998 }, 00:15:00.998 { 00:15:00.998 "name": null, 00:15:00.998 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:00.998 "is_configured": false, 00:15:00.998 "data_offset": 2048, 00:15:00.998 "data_size": 63488 00:15:00.998 } 00:15:00.998 ] 00:15:00.998 }' 00:15:00.998 20:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.998 20:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.258 20:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:15:01.258 20:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:01.258 20:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.258 20:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.258 [2024-11-26 20:27:54.693733] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:01.258 [2024-11-26 20:27:54.693809] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.258 [2024-11-26 20:27:54.693832] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:01.258 [2024-11-26 20:27:54.693846] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.258 [2024-11-26 20:27:54.694310] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.258 [2024-11-26 20:27:54.694343] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:01.258 [2024-11-26 20:27:54.694425] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:01.258 [2024-11-26 20:27:54.694452] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:01.258 pt2 00:15:01.258 20:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.258 20:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:01.258 20:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.258 20:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.258 [2024-11-26 20:27:54.705745] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:01.258 20:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.258 20:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:01.258 20:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:01.258 20:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:01.258 20:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:01.258 20:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:01.258 20:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:01.258 20:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.258 20:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:01.258 20:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.258 20:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.258 20:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.258 20:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.258 20:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.258 20:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.258 20:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.258 20:27:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.258 "name": "raid_bdev1", 00:15:01.258 "uuid": "fcb1ad1f-231a-41fd-9fca-bf65f2ade625", 00:15:01.258 "strip_size_kb": 64, 00:15:01.258 "state": "configuring", 00:15:01.258 "raid_level": "raid5f", 00:15:01.258 "superblock": true, 00:15:01.258 "num_base_bdevs": 3, 00:15:01.258 "num_base_bdevs_discovered": 1, 00:15:01.258 "num_base_bdevs_operational": 3, 00:15:01.258 "base_bdevs_list": [ 00:15:01.258 { 00:15:01.258 "name": "pt1", 00:15:01.258 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:01.258 "is_configured": true, 00:15:01.258 "data_offset": 2048, 00:15:01.258 "data_size": 63488 00:15:01.258 }, 00:15:01.258 { 00:15:01.258 "name": null, 00:15:01.258 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:01.258 "is_configured": false, 00:15:01.258 "data_offset": 0, 00:15:01.258 "data_size": 63488 00:15:01.258 }, 00:15:01.258 { 00:15:01.258 "name": null, 00:15:01.258 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:01.258 "is_configured": false, 00:15:01.258 "data_offset": 2048, 00:15:01.258 "data_size": 63488 00:15:01.258 } 00:15:01.258 ] 00:15:01.258 }' 00:15:01.258 20:27:54 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.258 20:27:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.828 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:01.828 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:01.828 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:01.828 20:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.828 20:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.828 [2024-11-26 20:27:55.129034] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:01.828 [2024-11-26 20:27:55.129118] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.828 [2024-11-26 20:27:55.129140] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:01.828 [2024-11-26 20:27:55.129151] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.828 [2024-11-26 20:27:55.129622] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.828 [2024-11-26 20:27:55.129662] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:01.828 [2024-11-26 20:27:55.129753] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:01.828 [2024-11-26 20:27:55.129778] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:01.828 pt2 00:15:01.828 20:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.828 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:01.828 20:27:55 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:01.828 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:01.828 20:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.828 20:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.828 [2024-11-26 20:27:55.141013] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:01.828 [2024-11-26 20:27:55.141084] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.828 [2024-11-26 20:27:55.141109] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:01.828 [2024-11-26 20:27:55.141119] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.828 [2024-11-26 20:27:55.141585] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.828 [2024-11-26 20:27:55.141612] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:01.828 [2024-11-26 20:27:55.141718] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:01.828 [2024-11-26 20:27:55.141745] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:01.828 [2024-11-26 20:27:55.141873] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:15:01.828 [2024-11-26 20:27:55.141892] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:01.828 [2024-11-26 20:27:55.142163] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:15:01.828 [2024-11-26 20:27:55.142674] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:15:01.828 [2024-11-26 20:27:55.142697] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:15:01.828 [2024-11-26 20:27:55.142817] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:01.828 pt3 00:15:01.828 20:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.828 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:01.828 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:01.828 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:01.828 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:01.828 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:01.828 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:01.828 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:01.828 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:01.828 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.828 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.828 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.828 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.828 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.828 20:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.828 20:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- 
# set +x 00:15:01.828 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.828 20:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.828 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.828 "name": "raid_bdev1", 00:15:01.828 "uuid": "fcb1ad1f-231a-41fd-9fca-bf65f2ade625", 00:15:01.828 "strip_size_kb": 64, 00:15:01.828 "state": "online", 00:15:01.828 "raid_level": "raid5f", 00:15:01.828 "superblock": true, 00:15:01.828 "num_base_bdevs": 3, 00:15:01.828 "num_base_bdevs_discovered": 3, 00:15:01.828 "num_base_bdevs_operational": 3, 00:15:01.828 "base_bdevs_list": [ 00:15:01.828 { 00:15:01.828 "name": "pt1", 00:15:01.828 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:01.828 "is_configured": true, 00:15:01.828 "data_offset": 2048, 00:15:01.828 "data_size": 63488 00:15:01.828 }, 00:15:01.828 { 00:15:01.828 "name": "pt2", 00:15:01.828 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:01.828 "is_configured": true, 00:15:01.828 "data_offset": 2048, 00:15:01.828 "data_size": 63488 00:15:01.828 }, 00:15:01.828 { 00:15:01.828 "name": "pt3", 00:15:01.828 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:01.828 "is_configured": true, 00:15:01.828 "data_offset": 2048, 00:15:01.828 "data_size": 63488 00:15:01.828 } 00:15:01.828 ] 00:15:01.828 }' 00:15:01.828 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.828 20:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.086 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:02.086 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:02.086 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:02.086 
20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:02.086 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:02.086 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:02.086 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:02.086 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:02.086 20:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.086 20:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.086 [2024-11-26 20:27:55.608430] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:02.086 20:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.345 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:02.345 "name": "raid_bdev1", 00:15:02.345 "aliases": [ 00:15:02.345 "fcb1ad1f-231a-41fd-9fca-bf65f2ade625" 00:15:02.345 ], 00:15:02.345 "product_name": "Raid Volume", 00:15:02.345 "block_size": 512, 00:15:02.345 "num_blocks": 126976, 00:15:02.345 "uuid": "fcb1ad1f-231a-41fd-9fca-bf65f2ade625", 00:15:02.345 "assigned_rate_limits": { 00:15:02.345 "rw_ios_per_sec": 0, 00:15:02.345 "rw_mbytes_per_sec": 0, 00:15:02.345 "r_mbytes_per_sec": 0, 00:15:02.345 "w_mbytes_per_sec": 0 00:15:02.345 }, 00:15:02.345 "claimed": false, 00:15:02.345 "zoned": false, 00:15:02.345 "supported_io_types": { 00:15:02.345 "read": true, 00:15:02.345 "write": true, 00:15:02.345 "unmap": false, 00:15:02.345 "flush": false, 00:15:02.345 "reset": true, 00:15:02.345 "nvme_admin": false, 00:15:02.345 "nvme_io": false, 00:15:02.345 "nvme_io_md": false, 00:15:02.345 "write_zeroes": true, 00:15:02.345 "zcopy": false, 00:15:02.345 "get_zone_info": false, 
00:15:02.345 "zone_management": false, 00:15:02.345 "zone_append": false, 00:15:02.345 "compare": false, 00:15:02.345 "compare_and_write": false, 00:15:02.345 "abort": false, 00:15:02.345 "seek_hole": false, 00:15:02.345 "seek_data": false, 00:15:02.345 "copy": false, 00:15:02.345 "nvme_iov_md": false 00:15:02.345 }, 00:15:02.345 "driver_specific": { 00:15:02.345 "raid": { 00:15:02.345 "uuid": "fcb1ad1f-231a-41fd-9fca-bf65f2ade625", 00:15:02.345 "strip_size_kb": 64, 00:15:02.345 "state": "online", 00:15:02.345 "raid_level": "raid5f", 00:15:02.345 "superblock": true, 00:15:02.345 "num_base_bdevs": 3, 00:15:02.345 "num_base_bdevs_discovered": 3, 00:15:02.345 "num_base_bdevs_operational": 3, 00:15:02.345 "base_bdevs_list": [ 00:15:02.345 { 00:15:02.345 "name": "pt1", 00:15:02.345 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:02.345 "is_configured": true, 00:15:02.345 "data_offset": 2048, 00:15:02.345 "data_size": 63488 00:15:02.345 }, 00:15:02.345 { 00:15:02.345 "name": "pt2", 00:15:02.345 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:02.345 "is_configured": true, 00:15:02.345 "data_offset": 2048, 00:15:02.345 "data_size": 63488 00:15:02.345 }, 00:15:02.345 { 00:15:02.345 "name": "pt3", 00:15:02.345 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:02.345 "is_configured": true, 00:15:02.345 "data_offset": 2048, 00:15:02.345 "data_size": 63488 00:15:02.345 } 00:15:02.345 ] 00:15:02.345 } 00:15:02.345 } 00:15:02.345 }' 00:15:02.345 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:02.345 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:02.345 pt2 00:15:02.345 pt3' 00:15:02.345 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:02.345 20:27:55 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:02.345 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:02.345 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:02.345 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:02.345 20:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.345 20:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.345 20:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.345 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:02.345 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:02.345 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:02.346 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:02.346 20:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.346 20:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.346 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:02.346 20:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.346 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:02.346 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:02.346 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:15:02.346 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:02.346 20:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.346 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:02.346 20:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.346 20:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.346 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:02.346 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:02.346 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:02.346 20:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.346 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:02.346 20:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.346 [2024-11-26 20:27:55.863981] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:02.346 20:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.605 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' fcb1ad1f-231a-41fd-9fca-bf65f2ade625 '!=' fcb1ad1f-231a-41fd-9fca-bf65f2ade625 ']' 00:15:02.605 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:15:02.605 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:02.605 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:02.605 20:27:55 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:02.605 20:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.605 20:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.605 [2024-11-26 20:27:55.911775] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:02.605 20:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.605 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:02.605 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:02.605 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:02.605 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:02.605 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:02.605 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:02.605 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.605 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.605 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.605 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.605 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.605 20:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.605 20:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.605 20:27:55 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.605 20:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.605 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.605 "name": "raid_bdev1", 00:15:02.605 "uuid": "fcb1ad1f-231a-41fd-9fca-bf65f2ade625", 00:15:02.605 "strip_size_kb": 64, 00:15:02.605 "state": "online", 00:15:02.605 "raid_level": "raid5f", 00:15:02.605 "superblock": true, 00:15:02.605 "num_base_bdevs": 3, 00:15:02.605 "num_base_bdevs_discovered": 2, 00:15:02.605 "num_base_bdevs_operational": 2, 00:15:02.606 "base_bdevs_list": [ 00:15:02.606 { 00:15:02.606 "name": null, 00:15:02.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.606 "is_configured": false, 00:15:02.606 "data_offset": 0, 00:15:02.606 "data_size": 63488 00:15:02.606 }, 00:15:02.606 { 00:15:02.606 "name": "pt2", 00:15:02.606 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:02.606 "is_configured": true, 00:15:02.606 "data_offset": 2048, 00:15:02.606 "data_size": 63488 00:15:02.606 }, 00:15:02.606 { 00:15:02.606 "name": "pt3", 00:15:02.606 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:02.606 "is_configured": true, 00:15:02.606 "data_offset": 2048, 00:15:02.606 "data_size": 63488 00:15:02.606 } 00:15:02.606 ] 00:15:02.606 }' 00:15:02.606 20:27:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.606 20:27:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.865 20:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:02.865 20:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.865 20:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.865 [2024-11-26 20:27:56.358915] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:02.865 [2024-11-26 20:27:56.358955] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:02.865 [2024-11-26 20:27:56.359034] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:02.865 [2024-11-26 20:27:56.359098] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:02.865 [2024-11-26 20:27:56.359109] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:15:02.865 20:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.865 20:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.865 20:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:02.865 20:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.865 20:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.865 20:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.865 20:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:02.865 20:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:02.865 20:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:02.865 20:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:02.865 20:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:02.865 20:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.865 20:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:15:03.126 20:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.126 20:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:03.126 20:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:03.126 20:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:03.126 20:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.126 20:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.126 20:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.126 20:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:03.126 20:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:03.126 20:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:03.126 20:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:03.126 20:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:03.126 20:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.126 20:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.126 [2024-11-26 20:27:56.442788] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:03.126 [2024-11-26 20:27:56.442852] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.126 [2024-11-26 20:27:56.442871] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:03.126 [2024-11-26 20:27:56.442881] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:15:03.126 [2024-11-26 20:27:56.445274] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.126 [2024-11-26 20:27:56.445315] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:03.126 [2024-11-26 20:27:56.445391] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:03.126 [2024-11-26 20:27:56.445428] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:03.126 pt2 00:15:03.126 20:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.126 20:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:03.126 20:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:03.126 20:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:03.126 20:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:03.126 20:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:03.126 20:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:03.126 20:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.126 20:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.126 20:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.126 20:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.126 20:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.126 20:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:03.126 20:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.126 20:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.126 20:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.126 20:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.126 "name": "raid_bdev1", 00:15:03.126 "uuid": "fcb1ad1f-231a-41fd-9fca-bf65f2ade625", 00:15:03.126 "strip_size_kb": 64, 00:15:03.126 "state": "configuring", 00:15:03.126 "raid_level": "raid5f", 00:15:03.126 "superblock": true, 00:15:03.126 "num_base_bdevs": 3, 00:15:03.126 "num_base_bdevs_discovered": 1, 00:15:03.126 "num_base_bdevs_operational": 2, 00:15:03.126 "base_bdevs_list": [ 00:15:03.126 { 00:15:03.126 "name": null, 00:15:03.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.126 "is_configured": false, 00:15:03.126 "data_offset": 2048, 00:15:03.126 "data_size": 63488 00:15:03.126 }, 00:15:03.126 { 00:15:03.126 "name": "pt2", 00:15:03.126 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:03.126 "is_configured": true, 00:15:03.126 "data_offset": 2048, 00:15:03.126 "data_size": 63488 00:15:03.126 }, 00:15:03.126 { 00:15:03.126 "name": null, 00:15:03.126 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:03.126 "is_configured": false, 00:15:03.126 "data_offset": 2048, 00:15:03.126 "data_size": 63488 00:15:03.126 } 00:15:03.126 ] 00:15:03.126 }' 00:15:03.126 20:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.126 20:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.385 20:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:03.385 20:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:03.385 20:27:56 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:15:03.385 20:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:03.385 20:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.385 20:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.385 [2024-11-26 20:27:56.886072] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:03.385 [2024-11-26 20:27:56.886160] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.385 [2024-11-26 20:27:56.886188] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:03.385 [2024-11-26 20:27:56.886200] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.385 [2024-11-26 20:27:56.886647] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.385 [2024-11-26 20:27:56.886673] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:03.385 [2024-11-26 20:27:56.886758] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:03.385 [2024-11-26 20:27:56.886794] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:03.385 [2024-11-26 20:27:56.886910] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:15:03.385 [2024-11-26 20:27:56.886926] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:03.385 [2024-11-26 20:27:56.887179] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:03.385 [2024-11-26 20:27:56.887715] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:15:03.385 [2024-11-26 20:27:56.887738] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000006d00 00:15:03.385 [2024-11-26 20:27:56.887992] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:03.385 pt3 00:15:03.385 20:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.385 20:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:03.385 20:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:03.385 20:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:03.385 20:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:03.385 20:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:03.385 20:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:03.385 20:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.385 20:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.385 20:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.385 20:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.385 20:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.385 20:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.385 20:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.385 20:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.385 20:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.645 20:27:56 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.645 "name": "raid_bdev1", 00:15:03.645 "uuid": "fcb1ad1f-231a-41fd-9fca-bf65f2ade625", 00:15:03.645 "strip_size_kb": 64, 00:15:03.645 "state": "online", 00:15:03.645 "raid_level": "raid5f", 00:15:03.645 "superblock": true, 00:15:03.645 "num_base_bdevs": 3, 00:15:03.645 "num_base_bdevs_discovered": 2, 00:15:03.645 "num_base_bdevs_operational": 2, 00:15:03.645 "base_bdevs_list": [ 00:15:03.645 { 00:15:03.645 "name": null, 00:15:03.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.645 "is_configured": false, 00:15:03.645 "data_offset": 2048, 00:15:03.645 "data_size": 63488 00:15:03.645 }, 00:15:03.645 { 00:15:03.645 "name": "pt2", 00:15:03.645 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:03.645 "is_configured": true, 00:15:03.645 "data_offset": 2048, 00:15:03.645 "data_size": 63488 00:15:03.645 }, 00:15:03.645 { 00:15:03.645 "name": "pt3", 00:15:03.646 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:03.646 "is_configured": true, 00:15:03.646 "data_offset": 2048, 00:15:03.646 "data_size": 63488 00:15:03.646 } 00:15:03.646 ] 00:15:03.646 }' 00:15:03.646 20:27:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.646 20:27:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.905 20:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:03.905 20:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.905 20:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.905 [2024-11-26 20:27:57.361271] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:03.905 [2024-11-26 20:27:57.361319] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:03.905 [2024-11-26 20:27:57.361415] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:03.905 [2024-11-26 20:27:57.361487] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:03.905 [2024-11-26 20:27:57.361502] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:15:03.905 20:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.905 20:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:03.905 20:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.905 20:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.905 20:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.905 20:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.905 20:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:03.905 20:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:03.905 20:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:15:03.905 20:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:15:03.905 20:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:15:03.905 20:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.905 20:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.905 20:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.905 20:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:15:03.905 20:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.905 20:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.905 [2024-11-26 20:27:57.425161] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:03.905 [2024-11-26 20:27:57.425251] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.905 [2024-11-26 20:27:57.425275] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:03.905 [2024-11-26 20:27:57.425289] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.905 [2024-11-26 20:27:57.428032] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.905 [2024-11-26 20:27:57.428080] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:03.905 [2024-11-26 20:27:57.428174] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:03.905 [2024-11-26 20:27:57.428227] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:03.905 [2024-11-26 20:27:57.428365] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:03.905 [2024-11-26 20:27:57.428395] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:03.905 [2024-11-26 20:27:57.428420] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:15:03.905 [2024-11-26 20:27:57.428468] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:03.905 pt1 00:15:03.905 20:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.905 20:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:15:03.905 20:27:57 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:03.905 20:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:03.905 20:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:03.905 20:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:03.905 20:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:03.905 20:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:03.905 20:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.905 20:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.905 20:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.905 20:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.905 20:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.905 20:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.905 20:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.905 20:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.165 20:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.165 20:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.165 "name": "raid_bdev1", 00:15:04.165 "uuid": "fcb1ad1f-231a-41fd-9fca-bf65f2ade625", 00:15:04.165 "strip_size_kb": 64, 00:15:04.165 "state": "configuring", 00:15:04.165 "raid_level": "raid5f", 00:15:04.165 
"superblock": true, 00:15:04.165 "num_base_bdevs": 3, 00:15:04.165 "num_base_bdevs_discovered": 1, 00:15:04.165 "num_base_bdevs_operational": 2, 00:15:04.165 "base_bdevs_list": [ 00:15:04.165 { 00:15:04.165 "name": null, 00:15:04.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.165 "is_configured": false, 00:15:04.165 "data_offset": 2048, 00:15:04.165 "data_size": 63488 00:15:04.165 }, 00:15:04.165 { 00:15:04.165 "name": "pt2", 00:15:04.165 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:04.165 "is_configured": true, 00:15:04.165 "data_offset": 2048, 00:15:04.165 "data_size": 63488 00:15:04.165 }, 00:15:04.165 { 00:15:04.165 "name": null, 00:15:04.165 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:04.165 "is_configured": false, 00:15:04.165 "data_offset": 2048, 00:15:04.165 "data_size": 63488 00:15:04.165 } 00:15:04.165 ] 00:15:04.165 }' 00:15:04.165 20:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.165 20:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.424 20:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:04.424 20:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:04.424 20:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.424 20:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.424 20:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.424 20:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:04.424 20:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:04.424 20:27:57 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.424 20:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.424 [2024-11-26 20:27:57.912454] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:04.424 [2024-11-26 20:27:57.912532] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.424 [2024-11-26 20:27:57.912552] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:04.424 [2024-11-26 20:27:57.912565] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.424 [2024-11-26 20:27:57.913104] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.424 [2024-11-26 20:27:57.913145] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:04.424 [2024-11-26 20:27:57.913236] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:04.424 [2024-11-26 20:27:57.913274] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:04.424 [2024-11-26 20:27:57.913383] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:15:04.424 [2024-11-26 20:27:57.913405] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:04.424 [2024-11-26 20:27:57.913700] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:04.424 [2024-11-26 20:27:57.914289] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:15:04.424 [2024-11-26 20:27:57.914311] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:15:04.424 [2024-11-26 20:27:57.914511] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:04.424 pt3 00:15:04.424 20:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:15:04.424 20:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:04.424 20:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:04.424 20:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:04.424 20:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:04.424 20:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:04.424 20:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:04.424 20:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.424 20:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.424 20:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.424 20:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.424 20:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.424 20:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.424 20:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.424 20:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.424 20:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.424 20:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.424 "name": "raid_bdev1", 00:15:04.424 "uuid": "fcb1ad1f-231a-41fd-9fca-bf65f2ade625", 00:15:04.424 "strip_size_kb": 64, 00:15:04.424 "state": "online", 00:15:04.424 "raid_level": 
"raid5f", 00:15:04.424 "superblock": true, 00:15:04.424 "num_base_bdevs": 3, 00:15:04.424 "num_base_bdevs_discovered": 2, 00:15:04.424 "num_base_bdevs_operational": 2, 00:15:04.424 "base_bdevs_list": [ 00:15:04.424 { 00:15:04.424 "name": null, 00:15:04.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.424 "is_configured": false, 00:15:04.424 "data_offset": 2048, 00:15:04.424 "data_size": 63488 00:15:04.424 }, 00:15:04.424 { 00:15:04.424 "name": "pt2", 00:15:04.425 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:04.425 "is_configured": true, 00:15:04.425 "data_offset": 2048, 00:15:04.425 "data_size": 63488 00:15:04.425 }, 00:15:04.425 { 00:15:04.425 "name": "pt3", 00:15:04.425 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:04.425 "is_configured": true, 00:15:04.425 "data_offset": 2048, 00:15:04.425 "data_size": 63488 00:15:04.425 } 00:15:04.425 ] 00:15:04.425 }' 00:15:04.425 20:27:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.425 20:27:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.993 20:27:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:04.993 20:27:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:04.993 20:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.993 20:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.993 20:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.993 20:27:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:04.993 20:27:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:04.993 20:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:04.993 20:27:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:04.993 20:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.993 [2024-11-26 20:27:58.447860] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:04.993 20:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.993 20:27:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' fcb1ad1f-231a-41fd-9fca-bf65f2ade625 '!=' fcb1ad1f-231a-41fd-9fca-bf65f2ade625 ']' 00:15:04.993 20:27:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 92242 00:15:04.993 20:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 92242 ']' 00:15:04.993 20:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 92242 00:15:04.993 20:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:15:04.993 20:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:04.993 20:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92242 00:15:04.993 20:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:04.993 20:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:04.993 killing process with pid 92242 00:15:04.993 20:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92242' 00:15:04.993 20:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 92242 00:15:04.993 [2024-11-26 20:27:58.515637] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:04.993 [2024-11-26 20:27:58.515752] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:15:04.993 [2024-11-26 20:27:58.515828] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:04.993 20:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 92242 00:15:04.993 [2024-11-26 20:27:58.515844] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:15:05.251 [2024-11-26 20:27:58.570581] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:05.511 20:27:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:05.511 00:15:05.511 real 0m6.699s 00:15:05.511 user 0m11.034s 00:15:05.511 sys 0m1.487s 00:15:05.511 20:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:05.511 20:27:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.511 ************************************ 00:15:05.511 END TEST raid5f_superblock_test 00:15:05.511 ************************************ 00:15:05.511 20:27:58 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:15:05.511 20:27:58 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:15:05.511 20:27:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:05.511 20:27:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:05.511 20:27:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:05.511 ************************************ 00:15:05.511 START TEST raid5f_rebuild_test 00:15:05.511 ************************************ 00:15:05.511 20:27:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 false false true 00:15:05.511 20:27:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:05.511 20:27:58 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:15:05.511 20:27:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:05.511 20:27:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:05.511 20:27:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:05.511 20:27:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:05.511 20:27:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:05.511 20:27:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:05.511 20:27:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:05.511 20:27:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:05.511 20:27:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:05.511 20:27:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:05.511 20:27:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:05.511 20:27:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:05.511 20:27:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:05.511 20:27:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:05.511 20:27:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:05.511 20:27:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:05.511 20:27:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:05.511 20:27:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:05.511 20:27:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 
00:15:05.511 20:27:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:05.511 20:27:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:05.511 20:27:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:05.511 20:27:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:05.511 20:27:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:05.511 20:27:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:05.511 20:27:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:05.511 20:27:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=92676 00:15:05.511 20:27:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:05.511 20:27:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 92676 00:15:05.511 20:27:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 92676 ']' 00:15:05.511 20:27:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:05.511 20:27:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:05.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:05.511 20:27:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:05.511 20:27:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:05.511 20:27:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.774 [2024-11-26 20:27:59.089488] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:05.774 [2024-11-26 20:27:59.089655] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92676 ] 00:15:05.774 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:05.774 Zero copy mechanism will not be used. 00:15:05.774 [2024-11-26 20:27:59.253944] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.040 [2024-11-26 20:27:59.336233] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.040 [2024-11-26 20:27:59.409600] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:06.040 [2024-11-26 20:27:59.409657] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:06.610 20:27:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:06.610 20:27:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:15:06.610 20:27:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:06.610 20:27:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:06.610 20:27:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.610 20:27:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.610 BaseBdev1_malloc 00:15:06.610 20:28:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:15:06.610 20:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:06.610 20:28:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.610 20:28:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.610 [2024-11-26 20:28:00.009542] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:06.610 [2024-11-26 20:28:00.009633] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:06.610 [2024-11-26 20:28:00.009664] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:06.610 [2024-11-26 20:28:00.009680] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:06.610 [2024-11-26 20:28:00.012032] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:06.610 [2024-11-26 20:28:00.012077] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:06.610 BaseBdev1 00:15:06.610 20:28:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.610 20:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:06.610 20:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:06.610 20:28:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.610 20:28:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.610 BaseBdev2_malloc 00:15:06.610 20:28:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.610 20:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:06.610 20:28:00 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.610 20:28:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.610 [2024-11-26 20:28:00.055910] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:06.610 [2024-11-26 20:28:00.055978] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:06.610 [2024-11-26 20:28:00.056002] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:06.610 [2024-11-26 20:28:00.056012] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:06.610 [2024-11-26 20:28:00.058303] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:06.610 [2024-11-26 20:28:00.058345] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:06.610 BaseBdev2 00:15:06.610 20:28:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.610 20:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:06.610 20:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:06.610 20:28:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.610 20:28:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.610 BaseBdev3_malloc 00:15:06.610 20:28:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.610 20:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:06.610 20:28:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.610 20:28:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.610 [2024-11-26 20:28:00.090372] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:06.610 [2024-11-26 20:28:00.090435] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:06.610 [2024-11-26 20:28:00.090462] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:06.610 [2024-11-26 20:28:00.090472] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:06.610 [2024-11-26 20:28:00.092697] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:06.610 [2024-11-26 20:28:00.092733] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:06.610 BaseBdev3 00:15:06.610 20:28:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.610 20:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:06.610 20:28:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.610 20:28:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.610 spare_malloc 00:15:06.610 20:28:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.610 20:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:06.610 20:28:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.610 20:28:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.610 spare_delay 00:15:06.610 20:28:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.610 20:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:06.610 20:28:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:06.610 20:28:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.610 [2024-11-26 20:28:00.134225] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:06.610 [2024-11-26 20:28:00.134313] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:06.610 [2024-11-26 20:28:00.134341] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:06.610 [2024-11-26 20:28:00.134352] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:06.610 [2024-11-26 20:28:00.136896] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:06.610 [2024-11-26 20:28:00.136961] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:06.610 spare 00:15:06.610 20:28:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.610 20:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:06.610 20:28:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.610 20:28:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.610 [2024-11-26 20:28:00.146261] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:06.610 [2024-11-26 20:28:00.148413] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:06.610 [2024-11-26 20:28:00.148495] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:06.610 [2024-11-26 20:28:00.148588] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:15:06.610 [2024-11-26 20:28:00.148600] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:06.610 [2024-11-26 
20:28:00.148963] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:15:06.610 [2024-11-26 20:28:00.149483] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:15:06.611 [2024-11-26 20:28:00.149507] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:15:06.611 [2024-11-26 20:28:00.149681] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:06.611 20:28:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.611 20:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:06.611 20:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:06.611 20:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:06.611 20:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:06.611 20:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:06.611 20:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:06.611 20:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.611 20:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.611 20:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.611 20:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.899 20:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.899 20:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.899 20:28:00 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.899 20:28:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.899 20:28:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.899 20:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.899 "name": "raid_bdev1", 00:15:06.899 "uuid": "a899ff94-83d7-4c20-a8e4-068ce36b687f", 00:15:06.899 "strip_size_kb": 64, 00:15:06.899 "state": "online", 00:15:06.899 "raid_level": "raid5f", 00:15:06.899 "superblock": false, 00:15:06.899 "num_base_bdevs": 3, 00:15:06.899 "num_base_bdevs_discovered": 3, 00:15:06.899 "num_base_bdevs_operational": 3, 00:15:06.899 "base_bdevs_list": [ 00:15:06.899 { 00:15:06.899 "name": "BaseBdev1", 00:15:06.899 "uuid": "ef8b2778-047f-5855-bd9a-c455a4599fb1", 00:15:06.899 "is_configured": true, 00:15:06.899 "data_offset": 0, 00:15:06.899 "data_size": 65536 00:15:06.899 }, 00:15:06.899 { 00:15:06.899 "name": "BaseBdev2", 00:15:06.899 "uuid": "3374b208-2a97-559f-b724-d9a2e5315663", 00:15:06.899 "is_configured": true, 00:15:06.899 "data_offset": 0, 00:15:06.899 "data_size": 65536 00:15:06.899 }, 00:15:06.899 { 00:15:06.899 "name": "BaseBdev3", 00:15:06.899 "uuid": "5b70fc3f-bbba-5bb9-97cc-3950f662edc8", 00:15:06.899 "is_configured": true, 00:15:06.899 "data_offset": 0, 00:15:06.899 "data_size": 65536 00:15:06.899 } 00:15:06.899 ] 00:15:06.899 }' 00:15:06.899 20:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.899 20:28:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.173 20:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:07.173 20:28:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.173 20:28:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.173 20:28:00 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:07.173 [2024-11-26 20:28:00.619486] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:07.173 20:28:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.173 20:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:15:07.173 20:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.173 20:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:07.173 20:28:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.173 20:28:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.173 20:28:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.173 20:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:07.173 20:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:07.173 20:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:07.173 20:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:07.173 20:28:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:07.173 20:28:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:07.173 20:28:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:07.173 20:28:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:07.173 20:28:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:07.173 20:28:00 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:15:07.173 20:28:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:07.173 20:28:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:07.173 20:28:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:07.173 20:28:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:07.432 [2024-11-26 20:28:00.950790] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:07.432 /dev/nbd0 00:15:07.691 20:28:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:07.691 20:28:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:07.691 20:28:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:07.691 20:28:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:15:07.691 20:28:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:07.691 20:28:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:07.691 20:28:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:07.691 20:28:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:15:07.691 20:28:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:07.691 20:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:07.691 20:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:07.691 1+0 records in 00:15:07.691 1+0 records out 00:15:07.691 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000412516 s, 
9.9 MB/s 00:15:07.691 20:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:07.691 20:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:15:07.691 20:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:07.691 20:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:07.691 20:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:15:07.691 20:28:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:07.691 20:28:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:07.691 20:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:07.691 20:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:07.691 20:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:07.691 20:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:15:07.952 512+0 records in 00:15:07.952 512+0 records out 00:15:07.952 67108864 bytes (67 MB, 64 MiB) copied, 0.353481 s, 190 MB/s 00:15:07.952 20:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:07.952 20:28:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:07.952 20:28:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:07.952 20:28:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:07.952 20:28:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:07.952 20:28:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i 
in "${nbd_list[@]}" 00:15:07.952 20:28:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:08.214 20:28:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:08.214 [2024-11-26 20:28:01.619943] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:08.214 20:28:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:08.214 20:28:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:08.214 20:28:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:08.214 20:28:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:08.214 20:28:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:08.214 20:28:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:08.214 20:28:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:08.214 20:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:08.214 20:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.214 20:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.214 [2024-11-26 20:28:01.636066] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:08.214 20:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.214 20:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:08.214 20:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:08.214 20:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:15:08.214 20:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:08.214 20:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:08.214 20:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:08.214 20:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.214 20:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.214 20:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.214 20:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.214 20:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.215 20:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.215 20:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.215 20:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.215 20:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.215 20:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.215 "name": "raid_bdev1", 00:15:08.215 "uuid": "a899ff94-83d7-4c20-a8e4-068ce36b687f", 00:15:08.215 "strip_size_kb": 64, 00:15:08.215 "state": "online", 00:15:08.215 "raid_level": "raid5f", 00:15:08.215 "superblock": false, 00:15:08.215 "num_base_bdevs": 3, 00:15:08.215 "num_base_bdevs_discovered": 2, 00:15:08.215 "num_base_bdevs_operational": 2, 00:15:08.215 "base_bdevs_list": [ 00:15:08.215 { 00:15:08.215 "name": null, 00:15:08.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.215 "is_configured": false, 00:15:08.215 "data_offset": 0, 00:15:08.215 "data_size": 65536 00:15:08.215 }, 
00:15:08.215 { 00:15:08.215 "name": "BaseBdev2", 00:15:08.215 "uuid": "3374b208-2a97-559f-b724-d9a2e5315663", 00:15:08.215 "is_configured": true, 00:15:08.215 "data_offset": 0, 00:15:08.215 "data_size": 65536 00:15:08.215 }, 00:15:08.215 { 00:15:08.215 "name": "BaseBdev3", 00:15:08.215 "uuid": "5b70fc3f-bbba-5bb9-97cc-3950f662edc8", 00:15:08.215 "is_configured": true, 00:15:08.215 "data_offset": 0, 00:15:08.215 "data_size": 65536 00:15:08.215 } 00:15:08.215 ] 00:15:08.215 }' 00:15:08.215 20:28:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.215 20:28:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.782 20:28:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:08.782 20:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.782 20:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.782 [2024-11-26 20:28:02.095367] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:08.782 [2024-11-26 20:28:02.102315] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b4e0 00:15:08.782 20:28:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.782 20:28:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:08.782 [2024-11-26 20:28:02.104812] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:09.719 20:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:09.719 20:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:09.719 20:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:09.719 20:28:03 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:15:09.719 20:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:09.719 20:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.719 20:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.719 20:28:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.719 20:28:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.719 20:28:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.719 20:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:09.719 "name": "raid_bdev1", 00:15:09.719 "uuid": "a899ff94-83d7-4c20-a8e4-068ce36b687f", 00:15:09.719 "strip_size_kb": 64, 00:15:09.719 "state": "online", 00:15:09.719 "raid_level": "raid5f", 00:15:09.719 "superblock": false, 00:15:09.719 "num_base_bdevs": 3, 00:15:09.719 "num_base_bdevs_discovered": 3, 00:15:09.719 "num_base_bdevs_operational": 3, 00:15:09.719 "process": { 00:15:09.719 "type": "rebuild", 00:15:09.719 "target": "spare", 00:15:09.719 "progress": { 00:15:09.719 "blocks": 20480, 00:15:09.719 "percent": 15 00:15:09.719 } 00:15:09.719 }, 00:15:09.719 "base_bdevs_list": [ 00:15:09.719 { 00:15:09.719 "name": "spare", 00:15:09.719 "uuid": "2d4be2c6-96c5-5214-9a63-3401ae4dc1f6", 00:15:09.719 "is_configured": true, 00:15:09.719 "data_offset": 0, 00:15:09.719 "data_size": 65536 00:15:09.719 }, 00:15:09.719 { 00:15:09.719 "name": "BaseBdev2", 00:15:09.719 "uuid": "3374b208-2a97-559f-b724-d9a2e5315663", 00:15:09.719 "is_configured": true, 00:15:09.719 "data_offset": 0, 00:15:09.719 "data_size": 65536 00:15:09.719 }, 00:15:09.719 { 00:15:09.719 "name": "BaseBdev3", 00:15:09.719 "uuid": "5b70fc3f-bbba-5bb9-97cc-3950f662edc8", 00:15:09.719 "is_configured": true, 00:15:09.719 
"data_offset": 0, 00:15:09.719 "data_size": 65536 00:15:09.719 } 00:15:09.719 ] 00:15:09.719 }' 00:15:09.719 20:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:09.719 20:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:09.719 20:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:09.719 20:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:09.719 20:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:09.719 20:28:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.719 20:28:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.719 [2024-11-26 20:28:03.265833] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:09.978 [2024-11-26 20:28:03.319008] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:09.978 [2024-11-26 20:28:03.319093] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:09.978 [2024-11-26 20:28:03.319114] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:09.978 [2024-11-26 20:28:03.319126] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:09.978 20:28:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.978 20:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:09.978 20:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:09.978 20:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:09.978 20:28:03 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:09.978 20:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:09.978 20:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:09.978 20:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.978 20:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.978 20:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.978 20:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.978 20:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.978 20:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.978 20:28:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.978 20:28:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.978 20:28:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.978 20:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.978 "name": "raid_bdev1", 00:15:09.978 "uuid": "a899ff94-83d7-4c20-a8e4-068ce36b687f", 00:15:09.978 "strip_size_kb": 64, 00:15:09.978 "state": "online", 00:15:09.978 "raid_level": "raid5f", 00:15:09.978 "superblock": false, 00:15:09.978 "num_base_bdevs": 3, 00:15:09.978 "num_base_bdevs_discovered": 2, 00:15:09.978 "num_base_bdevs_operational": 2, 00:15:09.978 "base_bdevs_list": [ 00:15:09.978 { 00:15:09.978 "name": null, 00:15:09.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.978 "is_configured": false, 00:15:09.978 "data_offset": 0, 00:15:09.978 "data_size": 65536 00:15:09.978 }, 00:15:09.978 { 00:15:09.978 
"name": "BaseBdev2", 00:15:09.978 "uuid": "3374b208-2a97-559f-b724-d9a2e5315663", 00:15:09.978 "is_configured": true, 00:15:09.978 "data_offset": 0, 00:15:09.978 "data_size": 65536 00:15:09.978 }, 00:15:09.978 { 00:15:09.978 "name": "BaseBdev3", 00:15:09.978 "uuid": "5b70fc3f-bbba-5bb9-97cc-3950f662edc8", 00:15:09.978 "is_configured": true, 00:15:09.978 "data_offset": 0, 00:15:09.978 "data_size": 65536 00:15:09.978 } 00:15:09.978 ] 00:15:09.978 }' 00:15:09.978 20:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.978 20:28:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.237 20:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:10.237 20:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:10.237 20:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:10.237 20:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:10.237 20:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:10.237 20:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.237 20:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.237 20:28:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.237 20:28:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.237 20:28:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.496 20:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:10.496 "name": "raid_bdev1", 00:15:10.496 "uuid": "a899ff94-83d7-4c20-a8e4-068ce36b687f", 00:15:10.496 "strip_size_kb": 64, 00:15:10.496 "state": 
"online", 00:15:10.496 "raid_level": "raid5f", 00:15:10.496 "superblock": false, 00:15:10.496 "num_base_bdevs": 3, 00:15:10.496 "num_base_bdevs_discovered": 2, 00:15:10.496 "num_base_bdevs_operational": 2, 00:15:10.496 "base_bdevs_list": [ 00:15:10.496 { 00:15:10.496 "name": null, 00:15:10.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.496 "is_configured": false, 00:15:10.496 "data_offset": 0, 00:15:10.496 "data_size": 65536 00:15:10.496 }, 00:15:10.496 { 00:15:10.496 "name": "BaseBdev2", 00:15:10.496 "uuid": "3374b208-2a97-559f-b724-d9a2e5315663", 00:15:10.496 "is_configured": true, 00:15:10.496 "data_offset": 0, 00:15:10.496 "data_size": 65536 00:15:10.496 }, 00:15:10.496 { 00:15:10.496 "name": "BaseBdev3", 00:15:10.496 "uuid": "5b70fc3f-bbba-5bb9-97cc-3950f662edc8", 00:15:10.496 "is_configured": true, 00:15:10.496 "data_offset": 0, 00:15:10.496 "data_size": 65536 00:15:10.496 } 00:15:10.496 ] 00:15:10.496 }' 00:15:10.496 20:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:10.496 20:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:10.496 20:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:10.496 20:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:10.496 20:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:10.496 20:28:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.496 20:28:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.496 [2024-11-26 20:28:03.904511] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:10.496 [2024-11-26 20:28:03.911275] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b5b0 00:15:10.496 20:28:03 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.496 20:28:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:10.496 [2024-11-26 20:28:03.913826] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:11.430 20:28:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:11.430 20:28:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:11.430 20:28:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:11.430 20:28:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:11.430 20:28:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:11.430 20:28:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.430 20:28:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.430 20:28:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.430 20:28:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.430 20:28:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.430 20:28:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:11.430 "name": "raid_bdev1", 00:15:11.430 "uuid": "a899ff94-83d7-4c20-a8e4-068ce36b687f", 00:15:11.430 "strip_size_kb": 64, 00:15:11.430 "state": "online", 00:15:11.430 "raid_level": "raid5f", 00:15:11.430 "superblock": false, 00:15:11.430 "num_base_bdevs": 3, 00:15:11.430 "num_base_bdevs_discovered": 3, 00:15:11.430 "num_base_bdevs_operational": 3, 00:15:11.430 "process": { 00:15:11.430 "type": "rebuild", 00:15:11.430 "target": "spare", 00:15:11.430 "progress": { 
00:15:11.430 "blocks": 20480, 00:15:11.430 "percent": 15 00:15:11.430 } 00:15:11.430 }, 00:15:11.430 "base_bdevs_list": [ 00:15:11.430 { 00:15:11.430 "name": "spare", 00:15:11.430 "uuid": "2d4be2c6-96c5-5214-9a63-3401ae4dc1f6", 00:15:11.430 "is_configured": true, 00:15:11.430 "data_offset": 0, 00:15:11.430 "data_size": 65536 00:15:11.430 }, 00:15:11.430 { 00:15:11.430 "name": "BaseBdev2", 00:15:11.430 "uuid": "3374b208-2a97-559f-b724-d9a2e5315663", 00:15:11.430 "is_configured": true, 00:15:11.430 "data_offset": 0, 00:15:11.430 "data_size": 65536 00:15:11.430 }, 00:15:11.430 { 00:15:11.430 "name": "BaseBdev3", 00:15:11.430 "uuid": "5b70fc3f-bbba-5bb9-97cc-3950f662edc8", 00:15:11.430 "is_configured": true, 00:15:11.430 "data_offset": 0, 00:15:11.430 "data_size": 65536 00:15:11.430 } 00:15:11.430 ] 00:15:11.430 }' 00:15:11.430 20:28:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:11.689 20:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:11.689 20:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:11.689 20:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:11.689 20:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:11.689 20:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:11.689 20:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:11.689 20:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=474 00:15:11.689 20:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:11.689 20:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:11.689 20:28:05 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:11.689 20:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:11.689 20:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:11.689 20:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:11.689 20:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.689 20:28:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.689 20:28:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.689 20:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.689 20:28:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.689 20:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:11.689 "name": "raid_bdev1", 00:15:11.689 "uuid": "a899ff94-83d7-4c20-a8e4-068ce36b687f", 00:15:11.689 "strip_size_kb": 64, 00:15:11.689 "state": "online", 00:15:11.689 "raid_level": "raid5f", 00:15:11.689 "superblock": false, 00:15:11.689 "num_base_bdevs": 3, 00:15:11.689 "num_base_bdevs_discovered": 3, 00:15:11.689 "num_base_bdevs_operational": 3, 00:15:11.689 "process": { 00:15:11.689 "type": "rebuild", 00:15:11.689 "target": "spare", 00:15:11.689 "progress": { 00:15:11.689 "blocks": 22528, 00:15:11.689 "percent": 17 00:15:11.689 } 00:15:11.689 }, 00:15:11.689 "base_bdevs_list": [ 00:15:11.689 { 00:15:11.689 "name": "spare", 00:15:11.689 "uuid": "2d4be2c6-96c5-5214-9a63-3401ae4dc1f6", 00:15:11.689 "is_configured": true, 00:15:11.689 "data_offset": 0, 00:15:11.689 "data_size": 65536 00:15:11.689 }, 00:15:11.689 { 00:15:11.689 "name": "BaseBdev2", 00:15:11.689 "uuid": "3374b208-2a97-559f-b724-d9a2e5315663", 00:15:11.689 "is_configured": true, 00:15:11.689 
"data_offset": 0, 00:15:11.689 "data_size": 65536 00:15:11.689 }, 00:15:11.689 { 00:15:11.689 "name": "BaseBdev3", 00:15:11.689 "uuid": "5b70fc3f-bbba-5bb9-97cc-3950f662edc8", 00:15:11.689 "is_configured": true, 00:15:11.689 "data_offset": 0, 00:15:11.689 "data_size": 65536 00:15:11.689 } 00:15:11.689 ] 00:15:11.689 }' 00:15:11.689 20:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:11.689 20:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:11.689 20:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:11.689 20:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:11.689 20:28:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:13.068 20:28:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:13.068 20:28:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:13.068 20:28:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:13.068 20:28:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:13.068 20:28:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:13.068 20:28:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.068 20:28:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.068 20:28:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.068 20:28:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.068 20:28:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.068 20:28:06 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.069 20:28:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:13.069 "name": "raid_bdev1", 00:15:13.069 "uuid": "a899ff94-83d7-4c20-a8e4-068ce36b687f", 00:15:13.069 "strip_size_kb": 64, 00:15:13.069 "state": "online", 00:15:13.069 "raid_level": "raid5f", 00:15:13.069 "superblock": false, 00:15:13.069 "num_base_bdevs": 3, 00:15:13.069 "num_base_bdevs_discovered": 3, 00:15:13.069 "num_base_bdevs_operational": 3, 00:15:13.069 "process": { 00:15:13.069 "type": "rebuild", 00:15:13.069 "target": "spare", 00:15:13.069 "progress": { 00:15:13.069 "blocks": 45056, 00:15:13.069 "percent": 34 00:15:13.069 } 00:15:13.069 }, 00:15:13.069 "base_bdevs_list": [ 00:15:13.069 { 00:15:13.069 "name": "spare", 00:15:13.069 "uuid": "2d4be2c6-96c5-5214-9a63-3401ae4dc1f6", 00:15:13.069 "is_configured": true, 00:15:13.069 "data_offset": 0, 00:15:13.069 "data_size": 65536 00:15:13.069 }, 00:15:13.069 { 00:15:13.069 "name": "BaseBdev2", 00:15:13.069 "uuid": "3374b208-2a97-559f-b724-d9a2e5315663", 00:15:13.069 "is_configured": true, 00:15:13.069 "data_offset": 0, 00:15:13.069 "data_size": 65536 00:15:13.069 }, 00:15:13.069 { 00:15:13.069 "name": "BaseBdev3", 00:15:13.069 "uuid": "5b70fc3f-bbba-5bb9-97cc-3950f662edc8", 00:15:13.069 "is_configured": true, 00:15:13.069 "data_offset": 0, 00:15:13.069 "data_size": 65536 00:15:13.069 } 00:15:13.069 ] 00:15:13.069 }' 00:15:13.069 20:28:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:13.069 20:28:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:13.069 20:28:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.069 20:28:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:13.069 20:28:06 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:15:14.036 20:28:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:14.036 20:28:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:14.036 20:28:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.036 20:28:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:14.036 20:28:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:14.036 20:28:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.036 20:28:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.036 20:28:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.036 20:28:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.036 20:28:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.036 20:28:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.036 20:28:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.036 "name": "raid_bdev1", 00:15:14.036 "uuid": "a899ff94-83d7-4c20-a8e4-068ce36b687f", 00:15:14.036 "strip_size_kb": 64, 00:15:14.036 "state": "online", 00:15:14.036 "raid_level": "raid5f", 00:15:14.036 "superblock": false, 00:15:14.036 "num_base_bdevs": 3, 00:15:14.036 "num_base_bdevs_discovered": 3, 00:15:14.036 "num_base_bdevs_operational": 3, 00:15:14.036 "process": { 00:15:14.036 "type": "rebuild", 00:15:14.036 "target": "spare", 00:15:14.036 "progress": { 00:15:14.036 "blocks": 69632, 00:15:14.036 "percent": 53 00:15:14.036 } 00:15:14.036 }, 00:15:14.036 "base_bdevs_list": [ 00:15:14.036 { 00:15:14.036 "name": "spare", 00:15:14.036 
"uuid": "2d4be2c6-96c5-5214-9a63-3401ae4dc1f6", 00:15:14.036 "is_configured": true, 00:15:14.036 "data_offset": 0, 00:15:14.036 "data_size": 65536 00:15:14.036 }, 00:15:14.036 { 00:15:14.036 "name": "BaseBdev2", 00:15:14.036 "uuid": "3374b208-2a97-559f-b724-d9a2e5315663", 00:15:14.036 "is_configured": true, 00:15:14.036 "data_offset": 0, 00:15:14.036 "data_size": 65536 00:15:14.036 }, 00:15:14.036 { 00:15:14.036 "name": "BaseBdev3", 00:15:14.036 "uuid": "5b70fc3f-bbba-5bb9-97cc-3950f662edc8", 00:15:14.036 "is_configured": true, 00:15:14.036 "data_offset": 0, 00:15:14.036 "data_size": 65536 00:15:14.036 } 00:15:14.036 ] 00:15:14.036 }' 00:15:14.036 20:28:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.036 20:28:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:14.036 20:28:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:14.036 20:28:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:14.036 20:28:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:14.979 20:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:14.979 20:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:14.979 20:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.979 20:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:14.980 20:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:14.980 20:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.980 20:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.980 20:28:08 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.980 20:28:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.980 20:28:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.980 20:28:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.238 20:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:15.238 "name": "raid_bdev1", 00:15:15.238 "uuid": "a899ff94-83d7-4c20-a8e4-068ce36b687f", 00:15:15.238 "strip_size_kb": 64, 00:15:15.238 "state": "online", 00:15:15.238 "raid_level": "raid5f", 00:15:15.238 "superblock": false, 00:15:15.238 "num_base_bdevs": 3, 00:15:15.238 "num_base_bdevs_discovered": 3, 00:15:15.238 "num_base_bdevs_operational": 3, 00:15:15.238 "process": { 00:15:15.238 "type": "rebuild", 00:15:15.238 "target": "spare", 00:15:15.238 "progress": { 00:15:15.238 "blocks": 92160, 00:15:15.238 "percent": 70 00:15:15.238 } 00:15:15.238 }, 00:15:15.238 "base_bdevs_list": [ 00:15:15.238 { 00:15:15.238 "name": "spare", 00:15:15.238 "uuid": "2d4be2c6-96c5-5214-9a63-3401ae4dc1f6", 00:15:15.238 "is_configured": true, 00:15:15.238 "data_offset": 0, 00:15:15.239 "data_size": 65536 00:15:15.239 }, 00:15:15.239 { 00:15:15.239 "name": "BaseBdev2", 00:15:15.239 "uuid": "3374b208-2a97-559f-b724-d9a2e5315663", 00:15:15.239 "is_configured": true, 00:15:15.239 "data_offset": 0, 00:15:15.239 "data_size": 65536 00:15:15.239 }, 00:15:15.239 { 00:15:15.239 "name": "BaseBdev3", 00:15:15.239 "uuid": "5b70fc3f-bbba-5bb9-97cc-3950f662edc8", 00:15:15.239 "is_configured": true, 00:15:15.239 "data_offset": 0, 00:15:15.239 "data_size": 65536 00:15:15.239 } 00:15:15.239 ] 00:15:15.239 }' 00:15:15.239 20:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:15.239 20:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:15.239 20:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:15.239 20:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:15.239 20:28:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:16.174 20:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:16.174 20:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:16.174 20:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:16.174 20:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:16.174 20:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:16.174 20:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:16.174 20:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.174 20:28:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.174 20:28:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.174 20:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.174 20:28:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.174 20:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:16.174 "name": "raid_bdev1", 00:15:16.174 "uuid": "a899ff94-83d7-4c20-a8e4-068ce36b687f", 00:15:16.174 "strip_size_kb": 64, 00:15:16.174 "state": "online", 00:15:16.174 "raid_level": "raid5f", 00:15:16.174 "superblock": false, 00:15:16.174 "num_base_bdevs": 3, 00:15:16.174 "num_base_bdevs_discovered": 3, 00:15:16.174 
"num_base_bdevs_operational": 3, 00:15:16.174 "process": { 00:15:16.174 "type": "rebuild", 00:15:16.174 "target": "spare", 00:15:16.174 "progress": { 00:15:16.174 "blocks": 114688, 00:15:16.174 "percent": 87 00:15:16.174 } 00:15:16.174 }, 00:15:16.174 "base_bdevs_list": [ 00:15:16.174 { 00:15:16.174 "name": "spare", 00:15:16.174 "uuid": "2d4be2c6-96c5-5214-9a63-3401ae4dc1f6", 00:15:16.174 "is_configured": true, 00:15:16.174 "data_offset": 0, 00:15:16.174 "data_size": 65536 00:15:16.174 }, 00:15:16.174 { 00:15:16.174 "name": "BaseBdev2", 00:15:16.174 "uuid": "3374b208-2a97-559f-b724-d9a2e5315663", 00:15:16.174 "is_configured": true, 00:15:16.174 "data_offset": 0, 00:15:16.174 "data_size": 65536 00:15:16.174 }, 00:15:16.174 { 00:15:16.174 "name": "BaseBdev3", 00:15:16.174 "uuid": "5b70fc3f-bbba-5bb9-97cc-3950f662edc8", 00:15:16.174 "is_configured": true, 00:15:16.174 "data_offset": 0, 00:15:16.174 "data_size": 65536 00:15:16.174 } 00:15:16.174 ] 00:15:16.174 }' 00:15:16.174 20:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:16.433 20:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:16.433 20:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:16.433 20:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:16.433 20:28:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:17.000 [2024-11-26 20:28:10.382964] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:17.000 [2024-11-26 20:28:10.383065] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:17.000 [2024-11-26 20:28:10.383119] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:17.259 20:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:15:17.259 20:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:17.259 20:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:17.259 20:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:17.259 20:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:17.259 20:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:17.259 20:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.259 20:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.259 20:28:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.259 20:28:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.259 20:28:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.259 20:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.259 "name": "raid_bdev1", 00:15:17.259 "uuid": "a899ff94-83d7-4c20-a8e4-068ce36b687f", 00:15:17.259 "strip_size_kb": 64, 00:15:17.259 "state": "online", 00:15:17.259 "raid_level": "raid5f", 00:15:17.259 "superblock": false, 00:15:17.259 "num_base_bdevs": 3, 00:15:17.259 "num_base_bdevs_discovered": 3, 00:15:17.259 "num_base_bdevs_operational": 3, 00:15:17.259 "base_bdevs_list": [ 00:15:17.259 { 00:15:17.259 "name": "spare", 00:15:17.259 "uuid": "2d4be2c6-96c5-5214-9a63-3401ae4dc1f6", 00:15:17.259 "is_configured": true, 00:15:17.259 "data_offset": 0, 00:15:17.259 "data_size": 65536 00:15:17.259 }, 00:15:17.259 { 00:15:17.259 "name": "BaseBdev2", 00:15:17.259 "uuid": "3374b208-2a97-559f-b724-d9a2e5315663", 00:15:17.259 "is_configured": true, 00:15:17.259 
"data_offset": 0, 00:15:17.259 "data_size": 65536 00:15:17.259 }, 00:15:17.259 { 00:15:17.259 "name": "BaseBdev3", 00:15:17.259 "uuid": "5b70fc3f-bbba-5bb9-97cc-3950f662edc8", 00:15:17.259 "is_configured": true, 00:15:17.259 "data_offset": 0, 00:15:17.259 "data_size": 65536 00:15:17.259 } 00:15:17.259 ] 00:15:17.259 }' 00:15:17.259 20:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:17.518 20:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:17.518 20:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:17.518 20:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:17.518 20:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:17.518 20:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:17.518 20:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:17.518 20:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:17.518 20:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:17.518 20:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:17.518 20:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.518 20:28:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.518 20:28:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.518 20:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.518 20:28:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.518 20:28:10 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.518 "name": "raid_bdev1", 00:15:17.518 "uuid": "a899ff94-83d7-4c20-a8e4-068ce36b687f", 00:15:17.518 "strip_size_kb": 64, 00:15:17.518 "state": "online", 00:15:17.518 "raid_level": "raid5f", 00:15:17.518 "superblock": false, 00:15:17.518 "num_base_bdevs": 3, 00:15:17.518 "num_base_bdevs_discovered": 3, 00:15:17.518 "num_base_bdevs_operational": 3, 00:15:17.518 "base_bdevs_list": [ 00:15:17.518 { 00:15:17.518 "name": "spare", 00:15:17.518 "uuid": "2d4be2c6-96c5-5214-9a63-3401ae4dc1f6", 00:15:17.518 "is_configured": true, 00:15:17.518 "data_offset": 0, 00:15:17.518 "data_size": 65536 00:15:17.518 }, 00:15:17.518 { 00:15:17.518 "name": "BaseBdev2", 00:15:17.518 "uuid": "3374b208-2a97-559f-b724-d9a2e5315663", 00:15:17.518 "is_configured": true, 00:15:17.518 "data_offset": 0, 00:15:17.518 "data_size": 65536 00:15:17.518 }, 00:15:17.518 { 00:15:17.518 "name": "BaseBdev3", 00:15:17.518 "uuid": "5b70fc3f-bbba-5bb9-97cc-3950f662edc8", 00:15:17.518 "is_configured": true, 00:15:17.518 "data_offset": 0, 00:15:17.518 "data_size": 65536 00:15:17.518 } 00:15:17.518 ] 00:15:17.518 }' 00:15:17.518 20:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:17.518 20:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:17.518 20:28:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:17.518 20:28:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:17.518 20:28:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:17.518 20:28:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:17.518 20:28:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:17.518 20:28:11 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:17.518 20:28:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:17.518 20:28:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:17.518 20:28:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.518 20:28:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.518 20:28:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.518 20:28:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.519 20:28:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.519 20:28:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.519 20:28:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.519 20:28:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.519 20:28:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.778 20:28:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.778 "name": "raid_bdev1", 00:15:17.778 "uuid": "a899ff94-83d7-4c20-a8e4-068ce36b687f", 00:15:17.778 "strip_size_kb": 64, 00:15:17.778 "state": "online", 00:15:17.778 "raid_level": "raid5f", 00:15:17.778 "superblock": false, 00:15:17.778 "num_base_bdevs": 3, 00:15:17.778 "num_base_bdevs_discovered": 3, 00:15:17.778 "num_base_bdevs_operational": 3, 00:15:17.778 "base_bdevs_list": [ 00:15:17.778 { 00:15:17.778 "name": "spare", 00:15:17.778 "uuid": "2d4be2c6-96c5-5214-9a63-3401ae4dc1f6", 00:15:17.778 "is_configured": true, 00:15:17.778 "data_offset": 0, 00:15:17.778 "data_size": 65536 00:15:17.778 }, 00:15:17.778 { 00:15:17.778 
"name": "BaseBdev2", 00:15:17.778 "uuid": "3374b208-2a97-559f-b724-d9a2e5315663", 00:15:17.778 "is_configured": true, 00:15:17.778 "data_offset": 0, 00:15:17.778 "data_size": 65536 00:15:17.778 }, 00:15:17.778 { 00:15:17.778 "name": "BaseBdev3", 00:15:17.778 "uuid": "5b70fc3f-bbba-5bb9-97cc-3950f662edc8", 00:15:17.778 "is_configured": true, 00:15:17.778 "data_offset": 0, 00:15:17.778 "data_size": 65536 00:15:17.778 } 00:15:17.778 ] 00:15:17.778 }' 00:15:17.778 20:28:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.778 20:28:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.037 20:28:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:18.037 20:28:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.037 20:28:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.037 [2024-11-26 20:28:11.510487] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:18.037 [2024-11-26 20:28:11.510530] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:18.038 [2024-11-26 20:28:11.510652] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:18.038 [2024-11-26 20:28:11.510758] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:18.038 [2024-11-26 20:28:11.510774] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:15:18.038 20:28:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.038 20:28:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:18.038 20:28:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.038 20:28:11 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.038 20:28:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.038 20:28:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.038 20:28:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:18.038 20:28:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:18.038 20:28:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:18.038 20:28:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:18.038 20:28:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:18.038 20:28:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:18.038 20:28:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:18.038 20:28:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:18.038 20:28:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:18.038 20:28:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:18.038 20:28:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:18.038 20:28:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:18.038 20:28:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:18.297 /dev/nbd0 00:15:18.297 20:28:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:18.297 20:28:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:18.297 20:28:11 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:18.297 20:28:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:15:18.297 20:28:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:18.297 20:28:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:18.297 20:28:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:18.297 20:28:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:15:18.297 20:28:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:18.297 20:28:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:18.297 20:28:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:18.297 1+0 records in 00:15:18.297 1+0 records out 00:15:18.297 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000384023 s, 10.7 MB/s 00:15:18.297 20:28:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:18.591 20:28:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:15:18.591 20:28:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:18.591 20:28:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:18.591 20:28:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:15:18.591 20:28:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:18.591 20:28:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:18.591 20:28:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:18.591 /dev/nbd1 00:15:18.591 20:28:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:18.591 20:28:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:18.591 20:28:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:18.591 20:28:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:15:18.591 20:28:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:18.591 20:28:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:18.591 20:28:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:18.591 20:28:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:15:18.591 20:28:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:18.591 20:28:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:18.591 20:28:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:18.591 1+0 records in 00:15:18.591 1+0 records out 00:15:18.591 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000396567 s, 10.3 MB/s 00:15:18.591 20:28:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:18.591 20:28:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:15:18.591 20:28:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:18.591 20:28:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:18.591 20:28:12 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:15:18.591 20:28:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:18.591 20:28:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:18.591 20:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:18.851 20:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:18.851 20:28:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:18.851 20:28:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:18.851 20:28:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:18.851 20:28:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:18.851 20:28:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:18.851 20:28:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:19.111 20:28:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:19.111 20:28:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:19.111 20:28:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:19.111 20:28:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:19.111 20:28:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:19.111 20:28:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:19.111 20:28:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:19.111 20:28:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:15:19.111 20:28:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:19.111 20:28:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:19.371 20:28:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:19.371 20:28:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:19.371 20:28:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:19.371 20:28:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:19.371 20:28:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:19.371 20:28:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:19.371 20:28:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:19.371 20:28:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:19.371 20:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:19.371 20:28:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 92676 00:15:19.371 20:28:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 92676 ']' 00:15:19.371 20:28:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 92676 00:15:19.371 20:28:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:15:19.371 20:28:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:19.371 20:28:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92676 00:15:19.371 20:28:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:19.371 20:28:12 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:19.371 20:28:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92676' 00:15:19.371 killing process with pid 92676 00:15:19.371 20:28:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 92676 00:15:19.371 Received shutdown signal, test time was about 60.000000 seconds 00:15:19.371 00:15:19.371 Latency(us) 00:15:19.371 [2024-11-26T20:28:12.923Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.371 [2024-11-26T20:28:12.923Z] =================================================================================================================== 00:15:19.371 [2024-11-26T20:28:12.923Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:19.371 [2024-11-26 20:28:12.734448] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:19.371 20:28:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 92676 00:15:19.371 [2024-11-26 20:28:12.799424] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:19.630 20:28:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:19.630 00:15:19.630 real 0m14.147s 00:15:19.630 user 0m17.791s 00:15:19.630 sys 0m2.087s 00:15:19.630 20:28:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:19.630 20:28:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.630 ************************************ 00:15:19.630 END TEST raid5f_rebuild_test 00:15:19.630 ************************************ 00:15:19.889 20:28:13 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:15:19.889 20:28:13 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:19.889 20:28:13 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:19.889 20:28:13 bdev_raid 
-- common/autotest_common.sh@10 -- # set +x 00:15:19.889 ************************************ 00:15:19.889 START TEST raid5f_rebuild_test_sb 00:15:19.889 ************************************ 00:15:19.889 20:28:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 true false true 00:15:19.889 20:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:19.889 20:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:15:19.889 20:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:19.889 20:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:19.889 20:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:19.889 20:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:19.889 20:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:19.889 20:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:19.889 20:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:19.889 20:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:19.889 20:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:19.889 20:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:19.889 20:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:19.889 20:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:19.889 20:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:19.889 20:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:15:19.889 20:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:19.889 20:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:19.889 20:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:19.889 20:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:19.889 20:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:19.889 20:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:19.889 20:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:19.889 20:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:19.889 20:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:19.889 20:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:19.889 20:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:19.889 20:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:19.889 20:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:19.889 20:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=93100 00:15:19.889 20:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 93100 00:15:19.889 20:28:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:19.889 20:28:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 93100 ']' 00:15:19.889 20:28:13 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:19.889 20:28:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:19.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:19.889 20:28:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:19.889 20:28:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:19.889 20:28:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.889 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:19.889 Zero copy mechanism will not be used. 00:15:19.889 [2024-11-26 20:28:13.303747] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:19.889 [2024-11-26 20:28:13.303923] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93100 ] 00:15:20.148 [2024-11-26 20:28:13.465878] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.148 [2024-11-26 20:28:13.550072] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.148 [2024-11-26 20:28:13.622986] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:20.148 [2024-11-26 20:28:13.623035] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:20.715 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:20.715 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:15:20.715 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in 
"${base_bdevs[@]}" 00:15:20.715 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:20.715 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.715 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.715 BaseBdev1_malloc 00:15:20.715 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.715 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:20.715 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.715 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.715 [2024-11-26 20:28:14.198811] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:20.715 [2024-11-26 20:28:14.198890] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.715 [2024-11-26 20:28:14.198918] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:20.715 [2024-11-26 20:28:14.198934] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.715 [2024-11-26 20:28:14.201154] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.715 [2024-11-26 20:28:14.201195] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:20.715 BaseBdev1 00:15:20.715 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.716 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:20.716 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:20.716 20:28:14 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.716 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.716 BaseBdev2_malloc 00:15:20.716 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.716 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:20.716 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.716 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.716 [2024-11-26 20:28:14.245258] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:20.716 [2024-11-26 20:28:14.245335] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.716 [2024-11-26 20:28:14.245363] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:20.716 [2024-11-26 20:28:14.245376] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.716 [2024-11-26 20:28:14.248264] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.716 [2024-11-26 20:28:14.248310] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:20.716 BaseBdev2 00:15:20.716 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.716 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:20.716 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:20.716 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.716 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:15:20.976 BaseBdev3_malloc 00:15:20.976 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.976 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:20.976 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.976 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.976 [2024-11-26 20:28:14.279665] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:20.976 [2024-11-26 20:28:14.279722] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.976 [2024-11-26 20:28:14.279748] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:20.976 [2024-11-26 20:28:14.279758] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.976 [2024-11-26 20:28:14.281878] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.976 [2024-11-26 20:28:14.281917] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:20.976 BaseBdev3 00:15:20.976 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.976 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:20.976 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.976 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.976 spare_malloc 00:15:20.976 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.976 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:15:20.976 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.976 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.976 spare_delay 00:15:20.976 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.976 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:20.976 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.976 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.976 [2024-11-26 20:28:14.322650] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:20.976 [2024-11-26 20:28:14.322704] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.976 [2024-11-26 20:28:14.322729] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:20.976 [2024-11-26 20:28:14.322738] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.976 [2024-11-26 20:28:14.324898] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.976 [2024-11-26 20:28:14.324934] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:20.976 spare 00:15:20.976 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.976 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:20.976 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.976 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.976 [2024-11-26 20:28:14.334708] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:20.976 [2024-11-26 20:28:14.336564] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:20.976 [2024-11-26 20:28:14.336650] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:20.976 [2024-11-26 20:28:14.336806] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:15:20.976 [2024-11-26 20:28:14.336832] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:20.976 [2024-11-26 20:28:14.337132] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:15:20.976 [2024-11-26 20:28:14.337602] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:15:20.976 [2024-11-26 20:28:14.337638] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:15:20.976 [2024-11-26 20:28:14.337769] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:20.976 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.976 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:20.976 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:20.976 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:20.976 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:20.976 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.976 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:20.976 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 
-- # local raid_bdev_info 00:15:20.976 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.976 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.976 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.976 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.976 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.976 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.976 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.976 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.976 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.977 "name": "raid_bdev1", 00:15:20.977 "uuid": "de18f001-fe25-41cc-83d0-7fe1ebdce5d2", 00:15:20.977 "strip_size_kb": 64, 00:15:20.977 "state": "online", 00:15:20.977 "raid_level": "raid5f", 00:15:20.977 "superblock": true, 00:15:20.977 "num_base_bdevs": 3, 00:15:20.977 "num_base_bdevs_discovered": 3, 00:15:20.977 "num_base_bdevs_operational": 3, 00:15:20.977 "base_bdevs_list": [ 00:15:20.977 { 00:15:20.977 "name": "BaseBdev1", 00:15:20.977 "uuid": "9eab2d8f-73ce-51b3-9f26-21e9c096588d", 00:15:20.977 "is_configured": true, 00:15:20.977 "data_offset": 2048, 00:15:20.977 "data_size": 63488 00:15:20.977 }, 00:15:20.977 { 00:15:20.977 "name": "BaseBdev2", 00:15:20.977 "uuid": "c5c1af3e-784b-542b-9ffa-de6b7edb912d", 00:15:20.977 "is_configured": true, 00:15:20.977 "data_offset": 2048, 00:15:20.977 "data_size": 63488 00:15:20.977 }, 00:15:20.977 { 00:15:20.977 "name": "BaseBdev3", 00:15:20.977 "uuid": "052a4371-7c2e-5212-84c4-71125838bb95", 00:15:20.977 "is_configured": true, 
00:15:20.977 "data_offset": 2048, 00:15:20.977 "data_size": 63488 00:15:20.977 } 00:15:20.977 ] 00:15:20.977 }' 00:15:20.977 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.977 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.546 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:21.546 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:21.546 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.546 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.546 [2024-11-26 20:28:14.803395] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:21.546 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.546 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:15:21.546 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:21.546 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.546 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.546 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.546 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.546 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:21.546 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:21.546 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:21.546 20:28:14 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:21.546 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:21.546 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:21.546 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:21.546 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:21.546 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:21.546 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:21.546 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:21.546 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:21.546 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:21.546 20:28:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:21.546 [2024-11-26 20:28:15.074793] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:21.806 /dev/nbd0 00:15:21.806 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:21.806 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:21.806 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:21.806 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:15:21.806 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:21.806 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 
-- # (( i <= 20 )) 00:15:21.806 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:21.806 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:15:21.806 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:21.806 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:21.806 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:21.806 1+0 records in 00:15:21.806 1+0 records out 00:15:21.806 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000415663 s, 9.9 MB/s 00:15:21.806 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:21.806 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:15:21.806 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:21.806 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:21.806 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:15:21.806 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:21.806 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:21.806 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:21.806 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:21.806 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:21.806 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom 
of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:15:22.065 496+0 records in 00:15:22.065 496+0 records out 00:15:22.065 65011712 bytes (65 MB, 62 MiB) copied, 0.314055 s, 207 MB/s 00:15:22.065 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:22.065 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:22.065 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:22.065 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:22.066 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:22.066 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:22.066 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:22.434 [2024-11-26 20:28:15.690275] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:22.434 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:22.434 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:22.434 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:22.434 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:22.434 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:22.434 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:22.434 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:22.434 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:22.434 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:22.434 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.434 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.434 [2024-11-26 20:28:15.726328] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:22.434 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.434 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:22.434 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:22.434 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:22.434 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:22.434 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:22.434 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:22.434 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.434 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.434 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.434 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.434 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.434 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.434 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.434 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.434 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.434 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.434 "name": "raid_bdev1", 00:15:22.434 "uuid": "de18f001-fe25-41cc-83d0-7fe1ebdce5d2", 00:15:22.434 "strip_size_kb": 64, 00:15:22.434 "state": "online", 00:15:22.434 "raid_level": "raid5f", 00:15:22.434 "superblock": true, 00:15:22.434 "num_base_bdevs": 3, 00:15:22.434 "num_base_bdevs_discovered": 2, 00:15:22.434 "num_base_bdevs_operational": 2, 00:15:22.434 "base_bdevs_list": [ 00:15:22.434 { 00:15:22.434 "name": null, 00:15:22.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.434 "is_configured": false, 00:15:22.434 "data_offset": 0, 00:15:22.434 "data_size": 63488 00:15:22.434 }, 00:15:22.434 { 00:15:22.434 "name": "BaseBdev2", 00:15:22.434 "uuid": "c5c1af3e-784b-542b-9ffa-de6b7edb912d", 00:15:22.434 "is_configured": true, 00:15:22.434 "data_offset": 2048, 00:15:22.434 "data_size": 63488 00:15:22.434 }, 00:15:22.434 { 00:15:22.434 "name": "BaseBdev3", 00:15:22.434 "uuid": "052a4371-7c2e-5212-84c4-71125838bb95", 00:15:22.434 "is_configured": true, 00:15:22.434 "data_offset": 2048, 00:15:22.434 "data_size": 63488 00:15:22.434 } 00:15:22.434 ] 00:15:22.434 }' 00:15:22.434 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.434 20:28:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.693 20:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:22.693 20:28:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.693 20:28:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.693 [2024-11-26 20:28:16.241522] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:22.952 [2024-11-26 20:28:16.248473] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028de0 00:15:22.952 20:28:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.952 20:28:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:22.952 [2024-11-26 20:28:16.250976] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:23.889 20:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:23.889 20:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:23.889 20:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:23.889 20:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:23.889 20:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:23.889 20:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.889 20:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.889 20:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.889 20:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.890 20:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.890 20:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:23.890 "name": "raid_bdev1", 00:15:23.890 "uuid": "de18f001-fe25-41cc-83d0-7fe1ebdce5d2", 00:15:23.890 "strip_size_kb": 64, 00:15:23.890 "state": "online", 00:15:23.890 "raid_level": "raid5f", 00:15:23.890 
"superblock": true, 00:15:23.890 "num_base_bdevs": 3, 00:15:23.890 "num_base_bdevs_discovered": 3, 00:15:23.890 "num_base_bdevs_operational": 3, 00:15:23.890 "process": { 00:15:23.890 "type": "rebuild", 00:15:23.890 "target": "spare", 00:15:23.890 "progress": { 00:15:23.890 "blocks": 20480, 00:15:23.890 "percent": 16 00:15:23.890 } 00:15:23.890 }, 00:15:23.890 "base_bdevs_list": [ 00:15:23.890 { 00:15:23.890 "name": "spare", 00:15:23.890 "uuid": "551a5019-425b-5730-91fd-3b50a4088547", 00:15:23.890 "is_configured": true, 00:15:23.890 "data_offset": 2048, 00:15:23.890 "data_size": 63488 00:15:23.890 }, 00:15:23.890 { 00:15:23.890 "name": "BaseBdev2", 00:15:23.890 "uuid": "c5c1af3e-784b-542b-9ffa-de6b7edb912d", 00:15:23.890 "is_configured": true, 00:15:23.890 "data_offset": 2048, 00:15:23.890 "data_size": 63488 00:15:23.890 }, 00:15:23.890 { 00:15:23.890 "name": "BaseBdev3", 00:15:23.890 "uuid": "052a4371-7c2e-5212-84c4-71125838bb95", 00:15:23.890 "is_configured": true, 00:15:23.890 "data_offset": 2048, 00:15:23.890 "data_size": 63488 00:15:23.890 } 00:15:23.890 ] 00:15:23.890 }' 00:15:23.890 20:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:23.890 20:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:23.890 20:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:23.890 20:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:23.890 20:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:23.890 20:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.890 20:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.890 [2024-11-26 20:28:17.367760] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:15:24.149 [2024-11-26 20:28:17.465338] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:24.149 [2024-11-26 20:28:17.465459] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:24.149 [2024-11-26 20:28:17.465480] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:24.149 [2024-11-26 20:28:17.465496] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:24.149 20:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.149 20:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:24.149 20:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:24.149 20:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:24.149 20:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:24.149 20:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:24.149 20:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:24.149 20:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.149 20:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.149 20:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.149 20:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.149 20:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.149 20:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:24.149 20:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.149 20:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.149 20:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.149 20:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.149 "name": "raid_bdev1", 00:15:24.149 "uuid": "de18f001-fe25-41cc-83d0-7fe1ebdce5d2", 00:15:24.149 "strip_size_kb": 64, 00:15:24.149 "state": "online", 00:15:24.149 "raid_level": "raid5f", 00:15:24.149 "superblock": true, 00:15:24.149 "num_base_bdevs": 3, 00:15:24.149 "num_base_bdevs_discovered": 2, 00:15:24.149 "num_base_bdevs_operational": 2, 00:15:24.149 "base_bdevs_list": [ 00:15:24.149 { 00:15:24.149 "name": null, 00:15:24.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.149 "is_configured": false, 00:15:24.149 "data_offset": 0, 00:15:24.149 "data_size": 63488 00:15:24.149 }, 00:15:24.149 { 00:15:24.149 "name": "BaseBdev2", 00:15:24.149 "uuid": "c5c1af3e-784b-542b-9ffa-de6b7edb912d", 00:15:24.149 "is_configured": true, 00:15:24.149 "data_offset": 2048, 00:15:24.149 "data_size": 63488 00:15:24.149 }, 00:15:24.149 { 00:15:24.149 "name": "BaseBdev3", 00:15:24.149 "uuid": "052a4371-7c2e-5212-84c4-71125838bb95", 00:15:24.149 "is_configured": true, 00:15:24.149 "data_offset": 2048, 00:15:24.149 "data_size": 63488 00:15:24.149 } 00:15:24.149 ] 00:15:24.149 }' 00:15:24.149 20:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.149 20:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.407 20:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:24.407 20:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:24.407 20:28:17 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:24.407 20:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:24.407 20:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:24.407 20:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.407 20:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.407 20:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.407 20:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.407 20:28:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.407 20:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:24.407 "name": "raid_bdev1", 00:15:24.407 "uuid": "de18f001-fe25-41cc-83d0-7fe1ebdce5d2", 00:15:24.407 "strip_size_kb": 64, 00:15:24.407 "state": "online", 00:15:24.408 "raid_level": "raid5f", 00:15:24.408 "superblock": true, 00:15:24.408 "num_base_bdevs": 3, 00:15:24.408 "num_base_bdevs_discovered": 2, 00:15:24.408 "num_base_bdevs_operational": 2, 00:15:24.408 "base_bdevs_list": [ 00:15:24.408 { 00:15:24.408 "name": null, 00:15:24.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.408 "is_configured": false, 00:15:24.408 "data_offset": 0, 00:15:24.408 "data_size": 63488 00:15:24.408 }, 00:15:24.408 { 00:15:24.408 "name": "BaseBdev2", 00:15:24.408 "uuid": "c5c1af3e-784b-542b-9ffa-de6b7edb912d", 00:15:24.408 "is_configured": true, 00:15:24.408 "data_offset": 2048, 00:15:24.408 "data_size": 63488 00:15:24.408 }, 00:15:24.408 { 00:15:24.408 "name": "BaseBdev3", 00:15:24.408 "uuid": "052a4371-7c2e-5212-84c4-71125838bb95", 00:15:24.408 "is_configured": true, 00:15:24.408 "data_offset": 2048, 00:15:24.408 
"data_size": 63488 00:15:24.408 } 00:15:24.408 ] 00:15:24.408 }' 00:15:24.408 20:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:24.758 20:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:24.758 20:28:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:24.758 20:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:24.758 20:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:24.758 20:28:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.758 20:28:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.758 [2024-11-26 20:28:18.034626] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:24.758 [2024-11-26 20:28:18.041346] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028eb0 00:15:24.758 20:28:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.758 20:28:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:24.758 [2024-11-26 20:28:18.043783] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:25.698 20:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:25.698 20:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:25.698 20:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:25.698 20:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:25.698 20:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:15:25.698 20:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.698 20:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.698 20:28:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.698 20:28:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.698 20:28:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.698 20:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:25.698 "name": "raid_bdev1", 00:15:25.698 "uuid": "de18f001-fe25-41cc-83d0-7fe1ebdce5d2", 00:15:25.698 "strip_size_kb": 64, 00:15:25.698 "state": "online", 00:15:25.698 "raid_level": "raid5f", 00:15:25.698 "superblock": true, 00:15:25.698 "num_base_bdevs": 3, 00:15:25.698 "num_base_bdevs_discovered": 3, 00:15:25.698 "num_base_bdevs_operational": 3, 00:15:25.698 "process": { 00:15:25.698 "type": "rebuild", 00:15:25.698 "target": "spare", 00:15:25.698 "progress": { 00:15:25.698 "blocks": 20480, 00:15:25.698 "percent": 16 00:15:25.698 } 00:15:25.698 }, 00:15:25.698 "base_bdevs_list": [ 00:15:25.698 { 00:15:25.698 "name": "spare", 00:15:25.698 "uuid": "551a5019-425b-5730-91fd-3b50a4088547", 00:15:25.698 "is_configured": true, 00:15:25.698 "data_offset": 2048, 00:15:25.698 "data_size": 63488 00:15:25.698 }, 00:15:25.698 { 00:15:25.698 "name": "BaseBdev2", 00:15:25.698 "uuid": "c5c1af3e-784b-542b-9ffa-de6b7edb912d", 00:15:25.698 "is_configured": true, 00:15:25.698 "data_offset": 2048, 00:15:25.698 "data_size": 63488 00:15:25.698 }, 00:15:25.698 { 00:15:25.698 "name": "BaseBdev3", 00:15:25.698 "uuid": "052a4371-7c2e-5212-84c4-71125838bb95", 00:15:25.698 "is_configured": true, 00:15:25.698 "data_offset": 2048, 00:15:25.698 "data_size": 63488 00:15:25.698 } 00:15:25.698 ] 00:15:25.698 }' 
00:15:25.698 20:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:25.698 20:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:25.698 20:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:25.698 20:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:25.698 20:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:25.698 20:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:25.698 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:25.698 20:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:25.698 20:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:25.698 20:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=488 00:15:25.698 20:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:25.698 20:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:25.698 20:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:25.698 20:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:25.698 20:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:25.698 20:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:25.698 20:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.698 20:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:15:25.698 20:28:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.698 20:28:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.698 20:28:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.698 20:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:25.698 "name": "raid_bdev1", 00:15:25.698 "uuid": "de18f001-fe25-41cc-83d0-7fe1ebdce5d2", 00:15:25.698 "strip_size_kb": 64, 00:15:25.698 "state": "online", 00:15:25.698 "raid_level": "raid5f", 00:15:25.698 "superblock": true, 00:15:25.698 "num_base_bdevs": 3, 00:15:25.698 "num_base_bdevs_discovered": 3, 00:15:25.698 "num_base_bdevs_operational": 3, 00:15:25.698 "process": { 00:15:25.698 "type": "rebuild", 00:15:25.698 "target": "spare", 00:15:25.698 "progress": { 00:15:25.698 "blocks": 22528, 00:15:25.698 "percent": 17 00:15:25.698 } 00:15:25.698 }, 00:15:25.698 "base_bdevs_list": [ 00:15:25.698 { 00:15:25.698 "name": "spare", 00:15:25.698 "uuid": "551a5019-425b-5730-91fd-3b50a4088547", 00:15:25.698 "is_configured": true, 00:15:25.698 "data_offset": 2048, 00:15:25.698 "data_size": 63488 00:15:25.698 }, 00:15:25.698 { 00:15:25.698 "name": "BaseBdev2", 00:15:25.698 "uuid": "c5c1af3e-784b-542b-9ffa-de6b7edb912d", 00:15:25.698 "is_configured": true, 00:15:25.698 "data_offset": 2048, 00:15:25.698 "data_size": 63488 00:15:25.698 }, 00:15:25.698 { 00:15:25.698 "name": "BaseBdev3", 00:15:25.698 "uuid": "052a4371-7c2e-5212-84c4-71125838bb95", 00:15:25.698 "is_configured": true, 00:15:25.698 "data_offset": 2048, 00:15:25.698 "data_size": 63488 00:15:25.698 } 00:15:25.698 ] 00:15:25.698 }' 00:15:25.698 20:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:25.957 20:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:15:25.957 20:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:25.957 20:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:25.957 20:28:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:26.895 20:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:26.895 20:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:26.895 20:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:26.895 20:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:26.895 20:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:26.895 20:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:26.895 20:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.896 20:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.896 20:28:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.896 20:28:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.896 20:28:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.896 20:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:26.896 "name": "raid_bdev1", 00:15:26.896 "uuid": "de18f001-fe25-41cc-83d0-7fe1ebdce5d2", 00:15:26.896 "strip_size_kb": 64, 00:15:26.896 "state": "online", 00:15:26.896 "raid_level": "raid5f", 00:15:26.896 "superblock": true, 00:15:26.896 "num_base_bdevs": 3, 00:15:26.896 "num_base_bdevs_discovered": 3, 00:15:26.896 
"num_base_bdevs_operational": 3, 00:15:26.896 "process": { 00:15:26.896 "type": "rebuild", 00:15:26.896 "target": "spare", 00:15:26.896 "progress": { 00:15:26.896 "blocks": 45056, 00:15:26.896 "percent": 35 00:15:26.896 } 00:15:26.896 }, 00:15:26.896 "base_bdevs_list": [ 00:15:26.896 { 00:15:26.896 "name": "spare", 00:15:26.896 "uuid": "551a5019-425b-5730-91fd-3b50a4088547", 00:15:26.896 "is_configured": true, 00:15:26.896 "data_offset": 2048, 00:15:26.896 "data_size": 63488 00:15:26.896 }, 00:15:26.896 { 00:15:26.896 "name": "BaseBdev2", 00:15:26.896 "uuid": "c5c1af3e-784b-542b-9ffa-de6b7edb912d", 00:15:26.896 "is_configured": true, 00:15:26.896 "data_offset": 2048, 00:15:26.896 "data_size": 63488 00:15:26.896 }, 00:15:26.896 { 00:15:26.896 "name": "BaseBdev3", 00:15:26.896 "uuid": "052a4371-7c2e-5212-84c4-71125838bb95", 00:15:26.896 "is_configured": true, 00:15:26.896 "data_offset": 2048, 00:15:26.896 "data_size": 63488 00:15:26.896 } 00:15:26.896 ] 00:15:26.896 }' 00:15:26.896 20:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:26.896 20:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:26.896 20:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:27.155 20:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:27.155 20:28:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:28.094 20:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:28.094 20:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:28.094 20:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:28.094 20:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:15:28.094 20:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:28.094 20:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:28.094 20:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.094 20:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.094 20:28:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.094 20:28:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.094 20:28:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.094 20:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:28.094 "name": "raid_bdev1", 00:15:28.094 "uuid": "de18f001-fe25-41cc-83d0-7fe1ebdce5d2", 00:15:28.094 "strip_size_kb": 64, 00:15:28.094 "state": "online", 00:15:28.094 "raid_level": "raid5f", 00:15:28.094 "superblock": true, 00:15:28.094 "num_base_bdevs": 3, 00:15:28.094 "num_base_bdevs_discovered": 3, 00:15:28.094 "num_base_bdevs_operational": 3, 00:15:28.094 "process": { 00:15:28.094 "type": "rebuild", 00:15:28.094 "target": "spare", 00:15:28.094 "progress": { 00:15:28.094 "blocks": 67584, 00:15:28.094 "percent": 53 00:15:28.094 } 00:15:28.094 }, 00:15:28.094 "base_bdevs_list": [ 00:15:28.094 { 00:15:28.094 "name": "spare", 00:15:28.094 "uuid": "551a5019-425b-5730-91fd-3b50a4088547", 00:15:28.094 "is_configured": true, 00:15:28.094 "data_offset": 2048, 00:15:28.094 "data_size": 63488 00:15:28.094 }, 00:15:28.094 { 00:15:28.094 "name": "BaseBdev2", 00:15:28.094 "uuid": "c5c1af3e-784b-542b-9ffa-de6b7edb912d", 00:15:28.094 "is_configured": true, 00:15:28.094 "data_offset": 2048, 00:15:28.094 "data_size": 63488 00:15:28.094 }, 00:15:28.094 { 00:15:28.094 "name": "BaseBdev3", 
00:15:28.094 "uuid": "052a4371-7c2e-5212-84c4-71125838bb95", 00:15:28.094 "is_configured": true, 00:15:28.094 "data_offset": 2048, 00:15:28.094 "data_size": 63488 00:15:28.094 } 00:15:28.094 ] 00:15:28.094 }' 00:15:28.094 20:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:28.094 20:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:28.094 20:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:28.094 20:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:28.094 20:28:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:29.472 20:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:29.472 20:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:29.472 20:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:29.472 20:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:29.472 20:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:29.472 20:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:29.472 20:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.472 20:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.472 20:28:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.472 20:28:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.472 20:28:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:15:29.472 20:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:29.472 "name": "raid_bdev1", 00:15:29.472 "uuid": "de18f001-fe25-41cc-83d0-7fe1ebdce5d2", 00:15:29.472 "strip_size_kb": 64, 00:15:29.472 "state": "online", 00:15:29.472 "raid_level": "raid5f", 00:15:29.472 "superblock": true, 00:15:29.472 "num_base_bdevs": 3, 00:15:29.472 "num_base_bdevs_discovered": 3, 00:15:29.472 "num_base_bdevs_operational": 3, 00:15:29.472 "process": { 00:15:29.472 "type": "rebuild", 00:15:29.472 "target": "spare", 00:15:29.472 "progress": { 00:15:29.472 "blocks": 92160, 00:15:29.472 "percent": 72 00:15:29.472 } 00:15:29.472 }, 00:15:29.472 "base_bdevs_list": [ 00:15:29.472 { 00:15:29.472 "name": "spare", 00:15:29.472 "uuid": "551a5019-425b-5730-91fd-3b50a4088547", 00:15:29.472 "is_configured": true, 00:15:29.472 "data_offset": 2048, 00:15:29.472 "data_size": 63488 00:15:29.472 }, 00:15:29.472 { 00:15:29.472 "name": "BaseBdev2", 00:15:29.472 "uuid": "c5c1af3e-784b-542b-9ffa-de6b7edb912d", 00:15:29.472 "is_configured": true, 00:15:29.472 "data_offset": 2048, 00:15:29.472 "data_size": 63488 00:15:29.472 }, 00:15:29.472 { 00:15:29.472 "name": "BaseBdev3", 00:15:29.472 "uuid": "052a4371-7c2e-5212-84c4-71125838bb95", 00:15:29.472 "is_configured": true, 00:15:29.472 "data_offset": 2048, 00:15:29.472 "data_size": 63488 00:15:29.472 } 00:15:29.472 ] 00:15:29.472 }' 00:15:29.472 20:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:29.472 20:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:29.472 20:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:29.472 20:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:29.472 20:28:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:30.409 20:28:23 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:30.409 20:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:30.409 20:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:30.409 20:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:30.409 20:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:30.410 20:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:30.410 20:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.410 20:28:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.410 20:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.410 20:28:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.410 20:28:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.410 20:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:30.410 "name": "raid_bdev1", 00:15:30.410 "uuid": "de18f001-fe25-41cc-83d0-7fe1ebdce5d2", 00:15:30.410 "strip_size_kb": 64, 00:15:30.410 "state": "online", 00:15:30.410 "raid_level": "raid5f", 00:15:30.410 "superblock": true, 00:15:30.410 "num_base_bdevs": 3, 00:15:30.410 "num_base_bdevs_discovered": 3, 00:15:30.410 "num_base_bdevs_operational": 3, 00:15:30.410 "process": { 00:15:30.410 "type": "rebuild", 00:15:30.410 "target": "spare", 00:15:30.410 "progress": { 00:15:30.410 "blocks": 114688, 00:15:30.410 "percent": 90 00:15:30.410 } 00:15:30.410 }, 00:15:30.410 "base_bdevs_list": [ 00:15:30.410 { 00:15:30.410 "name": "spare", 00:15:30.410 "uuid": 
"551a5019-425b-5730-91fd-3b50a4088547", 00:15:30.410 "is_configured": true, 00:15:30.410 "data_offset": 2048, 00:15:30.410 "data_size": 63488 00:15:30.410 }, 00:15:30.410 { 00:15:30.410 "name": "BaseBdev2", 00:15:30.410 "uuid": "c5c1af3e-784b-542b-9ffa-de6b7edb912d", 00:15:30.410 "is_configured": true, 00:15:30.410 "data_offset": 2048, 00:15:30.410 "data_size": 63488 00:15:30.410 }, 00:15:30.410 { 00:15:30.410 "name": "BaseBdev3", 00:15:30.410 "uuid": "052a4371-7c2e-5212-84c4-71125838bb95", 00:15:30.410 "is_configured": true, 00:15:30.410 "data_offset": 2048, 00:15:30.410 "data_size": 63488 00:15:30.410 } 00:15:30.410 ] 00:15:30.410 }' 00:15:30.410 20:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:30.410 20:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:30.410 20:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:30.410 20:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:30.410 20:28:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:30.978 [2024-11-26 20:28:24.311644] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:30.978 [2024-11-26 20:28:24.311751] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:30.978 [2024-11-26 20:28:24.311913] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:31.546 20:28:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:31.547 20:28:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:31.547 20:28:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:31.547 20:28:24 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:31.547 20:28:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:31.547 20:28:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:31.547 20:28:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.547 20:28:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.547 20:28:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.547 20:28:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.547 20:28:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.547 20:28:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:31.547 "name": "raid_bdev1", 00:15:31.547 "uuid": "de18f001-fe25-41cc-83d0-7fe1ebdce5d2", 00:15:31.547 "strip_size_kb": 64, 00:15:31.547 "state": "online", 00:15:31.547 "raid_level": "raid5f", 00:15:31.547 "superblock": true, 00:15:31.547 "num_base_bdevs": 3, 00:15:31.547 "num_base_bdevs_discovered": 3, 00:15:31.547 "num_base_bdevs_operational": 3, 00:15:31.547 "base_bdevs_list": [ 00:15:31.547 { 00:15:31.547 "name": "spare", 00:15:31.547 "uuid": "551a5019-425b-5730-91fd-3b50a4088547", 00:15:31.547 "is_configured": true, 00:15:31.547 "data_offset": 2048, 00:15:31.547 "data_size": 63488 00:15:31.547 }, 00:15:31.547 { 00:15:31.547 "name": "BaseBdev2", 00:15:31.547 "uuid": "c5c1af3e-784b-542b-9ffa-de6b7edb912d", 00:15:31.547 "is_configured": true, 00:15:31.547 "data_offset": 2048, 00:15:31.547 "data_size": 63488 00:15:31.547 }, 00:15:31.547 { 00:15:31.547 "name": "BaseBdev3", 00:15:31.547 "uuid": "052a4371-7c2e-5212-84c4-71125838bb95", 00:15:31.547 "is_configured": true, 00:15:31.547 "data_offset": 2048, 00:15:31.547 "data_size": 63488 00:15:31.547 } 
00:15:31.547 ] 00:15:31.547 }' 00:15:31.547 20:28:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:31.547 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:31.547 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:31.547 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:31.547 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:31.547 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:31.547 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:31.547 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:31.547 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:31.547 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:31.547 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.547 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.547 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.547 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.547 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.806 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:31.806 "name": "raid_bdev1", 00:15:31.806 "uuid": "de18f001-fe25-41cc-83d0-7fe1ebdce5d2", 00:15:31.806 "strip_size_kb": 64, 00:15:31.806 "state": "online", 00:15:31.806 "raid_level": 
"raid5f", 00:15:31.806 "superblock": true, 00:15:31.806 "num_base_bdevs": 3, 00:15:31.806 "num_base_bdevs_discovered": 3, 00:15:31.806 "num_base_bdevs_operational": 3, 00:15:31.806 "base_bdevs_list": [ 00:15:31.806 { 00:15:31.806 "name": "spare", 00:15:31.806 "uuid": "551a5019-425b-5730-91fd-3b50a4088547", 00:15:31.806 "is_configured": true, 00:15:31.806 "data_offset": 2048, 00:15:31.806 "data_size": 63488 00:15:31.806 }, 00:15:31.806 { 00:15:31.806 "name": "BaseBdev2", 00:15:31.806 "uuid": "c5c1af3e-784b-542b-9ffa-de6b7edb912d", 00:15:31.806 "is_configured": true, 00:15:31.806 "data_offset": 2048, 00:15:31.806 "data_size": 63488 00:15:31.806 }, 00:15:31.806 { 00:15:31.806 "name": "BaseBdev3", 00:15:31.806 "uuid": "052a4371-7c2e-5212-84c4-71125838bb95", 00:15:31.806 "is_configured": true, 00:15:31.806 "data_offset": 2048, 00:15:31.806 "data_size": 63488 00:15:31.806 } 00:15:31.806 ] 00:15:31.806 }' 00:15:31.806 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:31.806 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:31.806 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:31.806 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:31.806 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:31.806 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:31.806 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:31.806 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:31.806 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:31.806 20:28:25 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:31.806 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.806 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.806 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.807 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.807 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.807 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.807 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.807 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.807 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.807 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.807 "name": "raid_bdev1", 00:15:31.807 "uuid": "de18f001-fe25-41cc-83d0-7fe1ebdce5d2", 00:15:31.807 "strip_size_kb": 64, 00:15:31.807 "state": "online", 00:15:31.807 "raid_level": "raid5f", 00:15:31.807 "superblock": true, 00:15:31.807 "num_base_bdevs": 3, 00:15:31.807 "num_base_bdevs_discovered": 3, 00:15:31.807 "num_base_bdevs_operational": 3, 00:15:31.807 "base_bdevs_list": [ 00:15:31.807 { 00:15:31.807 "name": "spare", 00:15:31.807 "uuid": "551a5019-425b-5730-91fd-3b50a4088547", 00:15:31.807 "is_configured": true, 00:15:31.807 "data_offset": 2048, 00:15:31.807 "data_size": 63488 00:15:31.807 }, 00:15:31.807 { 00:15:31.807 "name": "BaseBdev2", 00:15:31.807 "uuid": "c5c1af3e-784b-542b-9ffa-de6b7edb912d", 00:15:31.807 "is_configured": true, 00:15:31.807 "data_offset": 2048, 00:15:31.807 
"data_size": 63488 00:15:31.807 }, 00:15:31.807 { 00:15:31.807 "name": "BaseBdev3", 00:15:31.807 "uuid": "052a4371-7c2e-5212-84c4-71125838bb95", 00:15:31.807 "is_configured": true, 00:15:31.807 "data_offset": 2048, 00:15:31.807 "data_size": 63488 00:15:31.807 } 00:15:31.807 ] 00:15:31.807 }' 00:15:31.807 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.807 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.100 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:32.100 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.100 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.403 [2024-11-26 20:28:25.646890] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:32.403 [2024-11-26 20:28:25.646935] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:32.403 [2024-11-26 20:28:25.647054] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:32.403 [2024-11-26 20:28:25.647147] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:32.403 [2024-11-26 20:28:25.647159] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:15:32.403 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.403 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.403 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:32.403 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.403 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:32.403 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.403 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:32.403 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:32.403 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:32.403 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:32.403 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:32.403 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:32.403 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:32.403 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:32.403 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:32.403 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:32.403 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:32.403 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:32.403 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:32.403 /dev/nbd0 00:15:32.664 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:32.665 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:32.665 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 
00:15:32.665 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:15:32.665 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:32.665 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:32.665 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:32.665 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:15:32.665 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:32.665 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:32.665 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:32.665 1+0 records in 00:15:32.665 1+0 records out 00:15:32.665 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000411197 s, 10.0 MB/s 00:15:32.665 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.665 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:15:32.665 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.665 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:32.665 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:15:32.665 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:32.665 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:32.665 20:28:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:32.665 /dev/nbd1 00:15:32.925 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:32.925 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:32.925 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:32.925 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:15:32.925 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:32.925 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:32.925 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:32.925 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:15:32.925 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:32.925 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:32.925 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:32.925 1+0 records in 00:15:32.925 1+0 records out 00:15:32.925 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000467173 s, 8.8 MB/s 00:15:32.925 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.925 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:15:32.925 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.925 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # 
'[' 4096 '!=' 0 ']' 00:15:32.925 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:15:32.925 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:32.925 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:32.925 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:32.925 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:32.925 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:32.925 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:32.925 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:32.925 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:32.925 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:32.925 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:33.185 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:33.185 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:33.185 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:33.185 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:33.185 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:33.185 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:33.185 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@41 -- # break 00:15:33.185 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:33.185 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:33.185 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:33.445 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:33.445 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:33.445 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:33.445 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:33.445 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:33.445 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:33.445 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:33.445 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:33.445 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:33.445 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:33.445 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.445 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.445 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.445 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:33.445 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:33.445 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.445 [2024-11-26 20:28:26.818399] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:33.445 [2024-11-26 20:28:26.818486] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.445 [2024-11-26 20:28:26.818512] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:33.445 [2024-11-26 20:28:26.818523] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.445 [2024-11-26 20:28:26.821116] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.445 [2024-11-26 20:28:26.821160] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:33.445 [2024-11-26 20:28:26.821259] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:33.445 [2024-11-26 20:28:26.821323] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:33.445 [2024-11-26 20:28:26.821456] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:33.445 [2024-11-26 20:28:26.821585] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:33.445 spare 00:15:33.445 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.445 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:33.445 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.445 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.445 [2024-11-26 20:28:26.921538] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:15:33.445 [2024-11-26 20:28:26.921586] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:33.445 [2024-11-26 20:28:26.921971] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047560 00:15:33.445 [2024-11-26 20:28:26.922525] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:15:33.445 [2024-11-26 20:28:26.922549] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:15:33.445 [2024-11-26 20:28:26.922764] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:33.445 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.445 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:33.445 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:33.445 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:33.445 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:33.445 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:33.445 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:33.445 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.445 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.445 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.445 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.445 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.445 20:28:26 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.445 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.445 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.445 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.445 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.445 "name": "raid_bdev1", 00:15:33.445 "uuid": "de18f001-fe25-41cc-83d0-7fe1ebdce5d2", 00:15:33.445 "strip_size_kb": 64, 00:15:33.445 "state": "online", 00:15:33.445 "raid_level": "raid5f", 00:15:33.445 "superblock": true, 00:15:33.445 "num_base_bdevs": 3, 00:15:33.445 "num_base_bdevs_discovered": 3, 00:15:33.445 "num_base_bdevs_operational": 3, 00:15:33.445 "base_bdevs_list": [ 00:15:33.445 { 00:15:33.445 "name": "spare", 00:15:33.445 "uuid": "551a5019-425b-5730-91fd-3b50a4088547", 00:15:33.445 "is_configured": true, 00:15:33.445 "data_offset": 2048, 00:15:33.445 "data_size": 63488 00:15:33.445 }, 00:15:33.445 { 00:15:33.445 "name": "BaseBdev2", 00:15:33.445 "uuid": "c5c1af3e-784b-542b-9ffa-de6b7edb912d", 00:15:33.445 "is_configured": true, 00:15:33.445 "data_offset": 2048, 00:15:33.445 "data_size": 63488 00:15:33.445 }, 00:15:33.445 { 00:15:33.445 "name": "BaseBdev3", 00:15:33.445 "uuid": "052a4371-7c2e-5212-84c4-71125838bb95", 00:15:33.445 "is_configured": true, 00:15:33.445 "data_offset": 2048, 00:15:33.445 "data_size": 63488 00:15:33.445 } 00:15:33.445 ] 00:15:33.445 }' 00:15:33.445 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.445 20:28:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.015 20:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:34.015 20:28:27 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:34.016 20:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:34.016 20:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:34.016 20:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:34.016 20:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.016 20:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.016 20:28:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.016 20:28:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.016 20:28:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.016 20:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:34.016 "name": "raid_bdev1", 00:15:34.016 "uuid": "de18f001-fe25-41cc-83d0-7fe1ebdce5d2", 00:15:34.016 "strip_size_kb": 64, 00:15:34.016 "state": "online", 00:15:34.016 "raid_level": "raid5f", 00:15:34.016 "superblock": true, 00:15:34.016 "num_base_bdevs": 3, 00:15:34.016 "num_base_bdevs_discovered": 3, 00:15:34.016 "num_base_bdevs_operational": 3, 00:15:34.016 "base_bdevs_list": [ 00:15:34.016 { 00:15:34.016 "name": "spare", 00:15:34.016 "uuid": "551a5019-425b-5730-91fd-3b50a4088547", 00:15:34.016 "is_configured": true, 00:15:34.016 "data_offset": 2048, 00:15:34.016 "data_size": 63488 00:15:34.016 }, 00:15:34.016 { 00:15:34.016 "name": "BaseBdev2", 00:15:34.016 "uuid": "c5c1af3e-784b-542b-9ffa-de6b7edb912d", 00:15:34.016 "is_configured": true, 00:15:34.016 "data_offset": 2048, 00:15:34.016 "data_size": 63488 00:15:34.016 }, 00:15:34.016 { 00:15:34.016 "name": "BaseBdev3", 00:15:34.016 "uuid": 
"052a4371-7c2e-5212-84c4-71125838bb95", 00:15:34.016 "is_configured": true, 00:15:34.016 "data_offset": 2048, 00:15:34.016 "data_size": 63488 00:15:34.016 } 00:15:34.016 ] 00:15:34.016 }' 00:15:34.016 20:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:34.016 20:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:34.016 20:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:34.016 20:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:34.016 20:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.016 20:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:34.016 20:28:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.016 20:28:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.016 20:28:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.275 20:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:34.275 20:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:34.275 20:28:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.275 20:28:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.275 [2024-11-26 20:28:27.577696] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:34.275 20:28:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.275 20:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:34.275 
20:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:34.275 20:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:34.275 20:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:34.275 20:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.275 20:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:34.275 20:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.275 20:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.275 20:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.275 20:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.275 20:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.275 20:28:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.275 20:28:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.275 20:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.275 20:28:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.275 20:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.275 "name": "raid_bdev1", 00:15:34.275 "uuid": "de18f001-fe25-41cc-83d0-7fe1ebdce5d2", 00:15:34.275 "strip_size_kb": 64, 00:15:34.275 "state": "online", 00:15:34.275 "raid_level": "raid5f", 00:15:34.275 "superblock": true, 00:15:34.275 "num_base_bdevs": 3, 00:15:34.275 "num_base_bdevs_discovered": 2, 00:15:34.275 "num_base_bdevs_operational": 2, 
00:15:34.275 "base_bdevs_list": [ 00:15:34.275 { 00:15:34.275 "name": null, 00:15:34.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.275 "is_configured": false, 00:15:34.275 "data_offset": 0, 00:15:34.275 "data_size": 63488 00:15:34.275 }, 00:15:34.275 { 00:15:34.275 "name": "BaseBdev2", 00:15:34.275 "uuid": "c5c1af3e-784b-542b-9ffa-de6b7edb912d", 00:15:34.275 "is_configured": true, 00:15:34.275 "data_offset": 2048, 00:15:34.275 "data_size": 63488 00:15:34.275 }, 00:15:34.275 { 00:15:34.275 "name": "BaseBdev3", 00:15:34.275 "uuid": "052a4371-7c2e-5212-84c4-71125838bb95", 00:15:34.275 "is_configured": true, 00:15:34.275 "data_offset": 2048, 00:15:34.275 "data_size": 63488 00:15:34.275 } 00:15:34.275 ] 00:15:34.275 }' 00:15:34.275 20:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.275 20:28:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.534 20:28:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:34.534 20:28:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.534 20:28:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.534 [2024-11-26 20:28:28.005002] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:34.534 [2024-11-26 20:28:28.005223] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:34.534 [2024-11-26 20:28:28.005246] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:34.534 [2024-11-26 20:28:28.005289] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:34.534 [2024-11-26 20:28:28.011810] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047630 00:15:34.534 20:28:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.534 20:28:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:34.534 [2024-11-26 20:28:28.014165] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:35.470 20:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:35.470 20:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:35.470 20:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:35.470 20:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:35.470 20:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:35.729 20:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.729 20:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.729 20:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.729 20:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.729 20:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.729 20:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:35.729 "name": "raid_bdev1", 00:15:35.729 "uuid": "de18f001-fe25-41cc-83d0-7fe1ebdce5d2", 00:15:35.729 "strip_size_kb": 64, 00:15:35.729 "state": "online", 00:15:35.729 
"raid_level": "raid5f", 00:15:35.729 "superblock": true, 00:15:35.729 "num_base_bdevs": 3, 00:15:35.729 "num_base_bdevs_discovered": 3, 00:15:35.729 "num_base_bdevs_operational": 3, 00:15:35.729 "process": { 00:15:35.729 "type": "rebuild", 00:15:35.729 "target": "spare", 00:15:35.729 "progress": { 00:15:35.729 "blocks": 20480, 00:15:35.729 "percent": 16 00:15:35.729 } 00:15:35.729 }, 00:15:35.729 "base_bdevs_list": [ 00:15:35.729 { 00:15:35.729 "name": "spare", 00:15:35.729 "uuid": "551a5019-425b-5730-91fd-3b50a4088547", 00:15:35.729 "is_configured": true, 00:15:35.729 "data_offset": 2048, 00:15:35.729 "data_size": 63488 00:15:35.729 }, 00:15:35.729 { 00:15:35.729 "name": "BaseBdev2", 00:15:35.729 "uuid": "c5c1af3e-784b-542b-9ffa-de6b7edb912d", 00:15:35.729 "is_configured": true, 00:15:35.729 "data_offset": 2048, 00:15:35.729 "data_size": 63488 00:15:35.729 }, 00:15:35.729 { 00:15:35.729 "name": "BaseBdev3", 00:15:35.729 "uuid": "052a4371-7c2e-5212-84c4-71125838bb95", 00:15:35.729 "is_configured": true, 00:15:35.729 "data_offset": 2048, 00:15:35.729 "data_size": 63488 00:15:35.729 } 00:15:35.729 ] 00:15:35.729 }' 00:15:35.729 20:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:35.729 20:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:35.729 20:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:35.729 20:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:35.729 20:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:35.729 20:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.729 20:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.729 [2024-11-26 20:28:29.166784] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:35.729 [2024-11-26 20:28:29.227987] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:35.729 [2024-11-26 20:28:29.228078] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:35.729 [2024-11-26 20:28:29.228102] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:35.729 [2024-11-26 20:28:29.228112] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:35.729 20:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.729 20:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:35.729 20:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:35.729 20:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:35.729 20:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:35.729 20:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.729 20:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:35.730 20:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.730 20:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.730 20:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.730 20:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.730 20:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.730 20:28:29 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.730 20:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.730 20:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.730 20:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.989 20:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.989 "name": "raid_bdev1", 00:15:35.989 "uuid": "de18f001-fe25-41cc-83d0-7fe1ebdce5d2", 00:15:35.989 "strip_size_kb": 64, 00:15:35.989 "state": "online", 00:15:35.989 "raid_level": "raid5f", 00:15:35.989 "superblock": true, 00:15:35.989 "num_base_bdevs": 3, 00:15:35.989 "num_base_bdevs_discovered": 2, 00:15:35.989 "num_base_bdevs_operational": 2, 00:15:35.989 "base_bdevs_list": [ 00:15:35.989 { 00:15:35.989 "name": null, 00:15:35.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.989 "is_configured": false, 00:15:35.989 "data_offset": 0, 00:15:35.989 "data_size": 63488 00:15:35.989 }, 00:15:35.989 { 00:15:35.989 "name": "BaseBdev2", 00:15:35.989 "uuid": "c5c1af3e-784b-542b-9ffa-de6b7edb912d", 00:15:35.989 "is_configured": true, 00:15:35.989 "data_offset": 2048, 00:15:35.989 "data_size": 63488 00:15:35.989 }, 00:15:35.989 { 00:15:35.989 "name": "BaseBdev3", 00:15:35.989 "uuid": "052a4371-7c2e-5212-84c4-71125838bb95", 00:15:35.989 "is_configured": true, 00:15:35.989 "data_offset": 2048, 00:15:35.989 "data_size": 63488 00:15:35.989 } 00:15:35.989 ] 00:15:35.989 }' 00:15:35.989 20:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.989 20:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.249 20:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:36.249 20:28:29 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.249 20:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.249 [2024-11-26 20:28:29.660708] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:36.249 [2024-11-26 20:28:29.660785] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:36.249 [2024-11-26 20:28:29.660810] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:15:36.249 [2024-11-26 20:28:29.660820] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:36.249 [2024-11-26 20:28:29.661371] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:36.249 [2024-11-26 20:28:29.661403] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:36.249 [2024-11-26 20:28:29.661510] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:36.249 [2024-11-26 20:28:29.661534] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:36.249 [2024-11-26 20:28:29.661548] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:36.249 [2024-11-26 20:28:29.661575] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:36.249 [2024-11-26 20:28:29.668099] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:15:36.249 spare 00:15:36.249 20:28:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.249 20:28:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:36.249 [2024-11-26 20:28:29.670492] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:37.265 20:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:37.265 20:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:37.265 20:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:37.265 20:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:37.265 20:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:37.265 20:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.265 20:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.265 20:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.265 20:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.265 20:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.265 20:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:37.265 "name": "raid_bdev1", 00:15:37.265 "uuid": "de18f001-fe25-41cc-83d0-7fe1ebdce5d2", 00:15:37.265 "strip_size_kb": 64, 00:15:37.265 "state": 
"online", 00:15:37.265 "raid_level": "raid5f", 00:15:37.265 "superblock": true, 00:15:37.265 "num_base_bdevs": 3, 00:15:37.265 "num_base_bdevs_discovered": 3, 00:15:37.265 "num_base_bdevs_operational": 3, 00:15:37.265 "process": { 00:15:37.265 "type": "rebuild", 00:15:37.265 "target": "spare", 00:15:37.265 "progress": { 00:15:37.265 "blocks": 20480, 00:15:37.265 "percent": 16 00:15:37.265 } 00:15:37.265 }, 00:15:37.265 "base_bdevs_list": [ 00:15:37.265 { 00:15:37.265 "name": "spare", 00:15:37.265 "uuid": "551a5019-425b-5730-91fd-3b50a4088547", 00:15:37.265 "is_configured": true, 00:15:37.265 "data_offset": 2048, 00:15:37.265 "data_size": 63488 00:15:37.265 }, 00:15:37.265 { 00:15:37.265 "name": "BaseBdev2", 00:15:37.265 "uuid": "c5c1af3e-784b-542b-9ffa-de6b7edb912d", 00:15:37.265 "is_configured": true, 00:15:37.265 "data_offset": 2048, 00:15:37.265 "data_size": 63488 00:15:37.265 }, 00:15:37.265 { 00:15:37.265 "name": "BaseBdev3", 00:15:37.265 "uuid": "052a4371-7c2e-5212-84c4-71125838bb95", 00:15:37.265 "is_configured": true, 00:15:37.266 "data_offset": 2048, 00:15:37.266 "data_size": 63488 00:15:37.266 } 00:15:37.266 ] 00:15:37.266 }' 00:15:37.266 20:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:37.266 20:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:37.266 20:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:37.525 20:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:37.525 20:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:37.525 20:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.525 20:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.525 [2024-11-26 20:28:30.835707] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:37.525 [2024-11-26 20:28:30.883816] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:37.525 [2024-11-26 20:28:30.883905] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:37.525 [2024-11-26 20:28:30.883924] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:37.525 [2024-11-26 20:28:30.883938] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:37.525 20:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.525 20:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:37.525 20:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:37.525 20:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:37.525 20:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:37.525 20:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.525 20:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:37.525 20:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.525 20:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.525 20:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.525 20:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.525 20:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.525 20:28:30 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.525 20:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.525 20:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.525 20:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.525 20:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.525 "name": "raid_bdev1", 00:15:37.525 "uuid": "de18f001-fe25-41cc-83d0-7fe1ebdce5d2", 00:15:37.525 "strip_size_kb": 64, 00:15:37.525 "state": "online", 00:15:37.525 "raid_level": "raid5f", 00:15:37.525 "superblock": true, 00:15:37.525 "num_base_bdevs": 3, 00:15:37.525 "num_base_bdevs_discovered": 2, 00:15:37.525 "num_base_bdevs_operational": 2, 00:15:37.525 "base_bdevs_list": [ 00:15:37.525 { 00:15:37.525 "name": null, 00:15:37.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.525 "is_configured": false, 00:15:37.525 "data_offset": 0, 00:15:37.525 "data_size": 63488 00:15:37.525 }, 00:15:37.525 { 00:15:37.525 "name": "BaseBdev2", 00:15:37.525 "uuid": "c5c1af3e-784b-542b-9ffa-de6b7edb912d", 00:15:37.525 "is_configured": true, 00:15:37.525 "data_offset": 2048, 00:15:37.525 "data_size": 63488 00:15:37.525 }, 00:15:37.525 { 00:15:37.525 "name": "BaseBdev3", 00:15:37.525 "uuid": "052a4371-7c2e-5212-84c4-71125838bb95", 00:15:37.525 "is_configured": true, 00:15:37.525 "data_offset": 2048, 00:15:37.525 "data_size": 63488 00:15:37.525 } 00:15:37.525 ] 00:15:37.525 }' 00:15:37.525 20:28:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.525 20:28:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.093 20:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:38.093 20:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:15:38.093 20:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:38.094 20:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:38.094 20:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:38.094 20:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.094 20:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.094 20:28:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.094 20:28:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.094 20:28:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.094 20:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:38.094 "name": "raid_bdev1", 00:15:38.094 "uuid": "de18f001-fe25-41cc-83d0-7fe1ebdce5d2", 00:15:38.094 "strip_size_kb": 64, 00:15:38.094 "state": "online", 00:15:38.094 "raid_level": "raid5f", 00:15:38.094 "superblock": true, 00:15:38.094 "num_base_bdevs": 3, 00:15:38.094 "num_base_bdevs_discovered": 2, 00:15:38.094 "num_base_bdevs_operational": 2, 00:15:38.094 "base_bdevs_list": [ 00:15:38.094 { 00:15:38.094 "name": null, 00:15:38.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.094 "is_configured": false, 00:15:38.094 "data_offset": 0, 00:15:38.094 "data_size": 63488 00:15:38.094 }, 00:15:38.094 { 00:15:38.094 "name": "BaseBdev2", 00:15:38.094 "uuid": "c5c1af3e-784b-542b-9ffa-de6b7edb912d", 00:15:38.094 "is_configured": true, 00:15:38.094 "data_offset": 2048, 00:15:38.094 "data_size": 63488 00:15:38.094 }, 00:15:38.094 { 00:15:38.094 "name": "BaseBdev3", 00:15:38.094 "uuid": "052a4371-7c2e-5212-84c4-71125838bb95", 00:15:38.094 "is_configured": true, 
00:15:38.094 "data_offset": 2048, 00:15:38.094 "data_size": 63488 00:15:38.094 } 00:15:38.094 ] 00:15:38.094 }' 00:15:38.094 20:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:38.094 20:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:38.094 20:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:38.094 20:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:38.094 20:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:38.094 20:28:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.094 20:28:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.094 20:28:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.094 20:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:38.094 20:28:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.094 20:28:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.094 [2024-11-26 20:28:31.515990] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:38.094 [2024-11-26 20:28:31.516052] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.094 [2024-11-26 20:28:31.516076] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:38.094 [2024-11-26 20:28:31.516087] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.094 [2024-11-26 20:28:31.516492] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.094 [2024-11-26 
20:28:31.516517] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:38.094 [2024-11-26 20:28:31.516589] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:38.094 [2024-11-26 20:28:31.516611] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:38.094 [2024-11-26 20:28:31.516639] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:38.094 [2024-11-26 20:28:31.516652] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:38.094 BaseBdev1 00:15:38.094 20:28:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.094 20:28:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:39.032 20:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:39.032 20:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:39.032 20:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:39.032 20:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:39.032 20:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.032 20:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:39.032 20:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.032 20:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.032 20:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.032 20:28:32 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.032 20:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.032 20:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.032 20:28:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.032 20:28:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.032 20:28:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.032 20:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.032 "name": "raid_bdev1", 00:15:39.032 "uuid": "de18f001-fe25-41cc-83d0-7fe1ebdce5d2", 00:15:39.032 "strip_size_kb": 64, 00:15:39.032 "state": "online", 00:15:39.032 "raid_level": "raid5f", 00:15:39.032 "superblock": true, 00:15:39.032 "num_base_bdevs": 3, 00:15:39.032 "num_base_bdevs_discovered": 2, 00:15:39.032 "num_base_bdevs_operational": 2, 00:15:39.032 "base_bdevs_list": [ 00:15:39.032 { 00:15:39.032 "name": null, 00:15:39.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.032 "is_configured": false, 00:15:39.032 "data_offset": 0, 00:15:39.032 "data_size": 63488 00:15:39.032 }, 00:15:39.032 { 00:15:39.032 "name": "BaseBdev2", 00:15:39.032 "uuid": "c5c1af3e-784b-542b-9ffa-de6b7edb912d", 00:15:39.032 "is_configured": true, 00:15:39.032 "data_offset": 2048, 00:15:39.032 "data_size": 63488 00:15:39.032 }, 00:15:39.032 { 00:15:39.032 "name": "BaseBdev3", 00:15:39.032 "uuid": "052a4371-7c2e-5212-84c4-71125838bb95", 00:15:39.032 "is_configured": true, 00:15:39.032 "data_offset": 2048, 00:15:39.032 "data_size": 63488 00:15:39.032 } 00:15:39.032 ] 00:15:39.032 }' 00:15:39.032 20:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.032 20:28:32 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:39.602 20:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:39.602 20:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:39.602 20:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:39.602 20:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:39.602 20:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:39.602 20:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.602 20:28:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.602 20:28:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.602 20:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.602 20:28:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.602 20:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:39.602 "name": "raid_bdev1", 00:15:39.602 "uuid": "de18f001-fe25-41cc-83d0-7fe1ebdce5d2", 00:15:39.602 "strip_size_kb": 64, 00:15:39.602 "state": "online", 00:15:39.602 "raid_level": "raid5f", 00:15:39.602 "superblock": true, 00:15:39.602 "num_base_bdevs": 3, 00:15:39.602 "num_base_bdevs_discovered": 2, 00:15:39.602 "num_base_bdevs_operational": 2, 00:15:39.602 "base_bdevs_list": [ 00:15:39.602 { 00:15:39.602 "name": null, 00:15:39.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.602 "is_configured": false, 00:15:39.602 "data_offset": 0, 00:15:39.602 "data_size": 63488 00:15:39.602 }, 00:15:39.602 { 00:15:39.602 "name": "BaseBdev2", 00:15:39.602 "uuid": "c5c1af3e-784b-542b-9ffa-de6b7edb912d", 
00:15:39.602 "is_configured": true, 00:15:39.602 "data_offset": 2048, 00:15:39.602 "data_size": 63488 00:15:39.602 }, 00:15:39.602 { 00:15:39.602 "name": "BaseBdev3", 00:15:39.602 "uuid": "052a4371-7c2e-5212-84c4-71125838bb95", 00:15:39.602 "is_configured": true, 00:15:39.602 "data_offset": 2048, 00:15:39.602 "data_size": 63488 00:15:39.603 } 00:15:39.603 ] 00:15:39.603 }' 00:15:39.603 20:28:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:39.603 20:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:39.603 20:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:39.603 20:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:39.603 20:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:39.603 20:28:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:15:39.603 20:28:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:39.603 20:28:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:39.603 20:28:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:39.603 20:28:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:39.603 20:28:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:39.603 20:28:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:39.603 20:28:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.603 20:28:33 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.603 [2024-11-26 20:28:33.097327] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:39.603 [2024-11-26 20:28:33.097521] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:39.603 [2024-11-26 20:28:33.097547] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:39.603 request: 00:15:39.603 { 00:15:39.603 "base_bdev": "BaseBdev1", 00:15:39.603 "raid_bdev": "raid_bdev1", 00:15:39.603 "method": "bdev_raid_add_base_bdev", 00:15:39.603 "req_id": 1 00:15:39.603 } 00:15:39.603 Got JSON-RPC error response 00:15:39.603 response: 00:15:39.603 { 00:15:39.603 "code": -22, 00:15:39.603 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:39.603 } 00:15:39.603 20:28:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:39.603 20:28:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:15:39.603 20:28:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:39.603 20:28:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:39.603 20:28:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:39.603 20:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:40.982 20:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:40.982 20:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:40.982 20:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:40.982 20:28:34 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:40.982 20:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.982 20:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:40.982 20:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.982 20:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.982 20:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.982 20:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.982 20:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.982 20:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.982 20:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.982 20:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.982 20:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.982 20:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.982 "name": "raid_bdev1", 00:15:40.982 "uuid": "de18f001-fe25-41cc-83d0-7fe1ebdce5d2", 00:15:40.982 "strip_size_kb": 64, 00:15:40.982 "state": "online", 00:15:40.982 "raid_level": "raid5f", 00:15:40.982 "superblock": true, 00:15:40.982 "num_base_bdevs": 3, 00:15:40.982 "num_base_bdevs_discovered": 2, 00:15:40.982 "num_base_bdevs_operational": 2, 00:15:40.982 "base_bdevs_list": [ 00:15:40.982 { 00:15:40.982 "name": null, 00:15:40.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.982 "is_configured": false, 00:15:40.982 "data_offset": 0, 00:15:40.982 "data_size": 63488 00:15:40.982 }, 00:15:40.982 { 00:15:40.982 
"name": "BaseBdev2", 00:15:40.982 "uuid": "c5c1af3e-784b-542b-9ffa-de6b7edb912d", 00:15:40.982 "is_configured": true, 00:15:40.982 "data_offset": 2048, 00:15:40.982 "data_size": 63488 00:15:40.982 }, 00:15:40.982 { 00:15:40.982 "name": "BaseBdev3", 00:15:40.982 "uuid": "052a4371-7c2e-5212-84c4-71125838bb95", 00:15:40.982 "is_configured": true, 00:15:40.982 "data_offset": 2048, 00:15:40.982 "data_size": 63488 00:15:40.982 } 00:15:40.982 ] 00:15:40.982 }' 00:15:40.982 20:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.982 20:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.242 20:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:41.242 20:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:41.242 20:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:41.242 20:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:41.242 20:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:41.242 20:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.242 20:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.242 20:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.242 20:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.242 20:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.242 20:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:41.242 "name": "raid_bdev1", 00:15:41.242 "uuid": "de18f001-fe25-41cc-83d0-7fe1ebdce5d2", 00:15:41.242 
"strip_size_kb": 64, 00:15:41.242 "state": "online", 00:15:41.242 "raid_level": "raid5f", 00:15:41.242 "superblock": true, 00:15:41.242 "num_base_bdevs": 3, 00:15:41.242 "num_base_bdevs_discovered": 2, 00:15:41.242 "num_base_bdevs_operational": 2, 00:15:41.242 "base_bdevs_list": [ 00:15:41.242 { 00:15:41.242 "name": null, 00:15:41.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.242 "is_configured": false, 00:15:41.242 "data_offset": 0, 00:15:41.242 "data_size": 63488 00:15:41.242 }, 00:15:41.242 { 00:15:41.242 "name": "BaseBdev2", 00:15:41.242 "uuid": "c5c1af3e-784b-542b-9ffa-de6b7edb912d", 00:15:41.242 "is_configured": true, 00:15:41.242 "data_offset": 2048, 00:15:41.242 "data_size": 63488 00:15:41.242 }, 00:15:41.242 { 00:15:41.242 "name": "BaseBdev3", 00:15:41.242 "uuid": "052a4371-7c2e-5212-84c4-71125838bb95", 00:15:41.242 "is_configured": true, 00:15:41.242 "data_offset": 2048, 00:15:41.242 "data_size": 63488 00:15:41.242 } 00:15:41.242 ] 00:15:41.243 }' 00:15:41.243 20:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:41.243 20:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:41.243 20:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:41.243 20:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:41.243 20:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 93100 00:15:41.243 20:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 93100 ']' 00:15:41.243 20:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 93100 00:15:41.243 20:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:15:41.243 20:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:41.243 20:28:34 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93100 00:15:41.243 killing process with pid 93100 00:15:41.243 20:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:41.243 20:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:41.243 20:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93100' 00:15:41.243 20:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 93100 00:15:41.243 Received shutdown signal, test time was about 60.000000 seconds 00:15:41.243 00:15:41.243 Latency(us) 00:15:41.243 [2024-11-26T20:28:34.795Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:41.243 [2024-11-26T20:28:34.795Z] =================================================================================================================== 00:15:41.243 [2024-11-26T20:28:34.795Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:41.243 20:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 93100 00:15:41.243 [2024-11-26 20:28:34.754022] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:41.243 [2024-11-26 20:28:34.754155] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:41.243 [2024-11-26 20:28:34.754237] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:41.243 [2024-11-26 20:28:34.754248] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:15:41.502 [2024-11-26 20:28:34.820157] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:41.762 20:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:41.762 00:15:41.762 real 0m21.955s 00:15:41.762 user 0m28.558s 
00:15:41.762 sys 0m2.792s 00:15:41.762 20:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:41.762 20:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.762 ************************************ 00:15:41.762 END TEST raid5f_rebuild_test_sb 00:15:41.762 ************************************ 00:15:41.762 20:28:35 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:15:41.762 20:28:35 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:15:41.762 20:28:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:41.762 20:28:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:41.762 20:28:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:41.762 ************************************ 00:15:41.762 START TEST raid5f_state_function_test 00:15:41.762 ************************************ 00:15:41.762 20:28:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 false 00:15:41.762 20:28:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:41.762 20:28:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:41.762 20:28:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:41.762 20:28:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:41.762 20:28:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:41.762 20:28:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:41.762 20:28:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:41.762 20:28:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:15:41.762 20:28:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:41.762 20:28:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:41.762 20:28:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:41.762 20:28:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:41.762 20:28:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:41.762 20:28:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:41.762 20:28:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:41.762 20:28:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:41.762 20:28:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:41.762 20:28:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:41.762 20:28:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:41.762 20:28:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:41.762 20:28:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:41.762 20:28:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:41.762 20:28:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:41.762 20:28:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:41.762 20:28:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:41.762 20:28:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:15:41.762 20:28:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:41.762 20:28:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:41.762 20:28:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:41.762 20:28:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=93835 00:15:41.762 20:28:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:41.762 Process raid pid: 93835 00:15:41.762 20:28:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 93835' 00:15:41.762 20:28:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 93835 00:15:41.762 20:28:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 93835 ']' 00:15:41.762 20:28:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:41.762 20:28:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:41.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:41.762 20:28:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:41.762 20:28:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:41.762 20:28:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.021 [2024-11-26 20:28:35.326244] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:15:42.021 [2024-11-26 20:28:35.326369] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:42.021 [2024-11-26 20:28:35.488095] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.021 [2024-11-26 20:28:35.561881] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.278 [2024-11-26 20:28:35.633199] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:42.279 [2024-11-26 20:28:35.633241] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:42.846 20:28:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:42.846 20:28:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:15:42.846 20:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:42.846 20:28:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.846 20:28:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.846 [2024-11-26 20:28:36.175062] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:42.846 [2024-11-26 20:28:36.175119] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:42.846 [2024-11-26 20:28:36.175132] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:42.846 [2024-11-26 20:28:36.175144] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:42.846 [2024-11-26 20:28:36.175151] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:15:42.846 [2024-11-26 20:28:36.175165] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:42.846 [2024-11-26 20:28:36.175172] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:42.846 [2024-11-26 20:28:36.175182] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:42.846 20:28:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.846 20:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:42.846 20:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:42.846 20:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:42.846 20:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:42.846 20:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:42.846 20:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:42.846 20:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.846 20:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.846 20:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.846 20:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.846 20:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.846 20:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:42.846 20:28:36 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.846 20:28:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.846 20:28:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.846 20:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.846 "name": "Existed_Raid", 00:15:42.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.846 "strip_size_kb": 64, 00:15:42.846 "state": "configuring", 00:15:42.846 "raid_level": "raid5f", 00:15:42.846 "superblock": false, 00:15:42.846 "num_base_bdevs": 4, 00:15:42.846 "num_base_bdevs_discovered": 0, 00:15:42.846 "num_base_bdevs_operational": 4, 00:15:42.846 "base_bdevs_list": [ 00:15:42.846 { 00:15:42.846 "name": "BaseBdev1", 00:15:42.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.846 "is_configured": false, 00:15:42.846 "data_offset": 0, 00:15:42.846 "data_size": 0 00:15:42.846 }, 00:15:42.846 { 00:15:42.846 "name": "BaseBdev2", 00:15:42.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.846 "is_configured": false, 00:15:42.846 "data_offset": 0, 00:15:42.846 "data_size": 0 00:15:42.846 }, 00:15:42.846 { 00:15:42.846 "name": "BaseBdev3", 00:15:42.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.846 "is_configured": false, 00:15:42.846 "data_offset": 0, 00:15:42.846 "data_size": 0 00:15:42.846 }, 00:15:42.846 { 00:15:42.846 "name": "BaseBdev4", 00:15:42.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.846 "is_configured": false, 00:15:42.846 "data_offset": 0, 00:15:42.846 "data_size": 0 00:15:42.846 } 00:15:42.846 ] 00:15:42.846 }' 00:15:42.846 20:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.846 20:28:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.105 20:28:36 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:43.105 20:28:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.105 20:28:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.105 [2024-11-26 20:28:36.622178] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:43.105 [2024-11-26 20:28:36.622232] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:15:43.105 20:28:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.105 20:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:43.105 20:28:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.105 20:28:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.105 [2024-11-26 20:28:36.634206] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:43.105 [2024-11-26 20:28:36.634248] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:43.105 [2024-11-26 20:28:36.634256] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:43.105 [2024-11-26 20:28:36.634265] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:43.105 [2024-11-26 20:28:36.634271] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:43.105 [2024-11-26 20:28:36.634280] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:43.105 [2024-11-26 20:28:36.634286] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:15:43.105 [2024-11-26 20:28:36.634294] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:43.105 20:28:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.105 20:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:43.105 20:28:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.105 20:28:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.105 [2024-11-26 20:28:36.655784] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:43.364 BaseBdev1 00:15:43.364 20:28:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.364 20:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:43.364 20:28:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:43.364 20:28:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:43.364 20:28:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:43.364 20:28:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:43.364 20:28:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:43.364 20:28:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:43.364 20:28:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.364 20:28:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.364 20:28:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.364 
20:28:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:43.364 20:28:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.364 20:28:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.364 [ 00:15:43.364 { 00:15:43.364 "name": "BaseBdev1", 00:15:43.364 "aliases": [ 00:15:43.364 "9fb76b92-7659-4521-8246-11b987ea67ce" 00:15:43.364 ], 00:15:43.364 "product_name": "Malloc disk", 00:15:43.364 "block_size": 512, 00:15:43.364 "num_blocks": 65536, 00:15:43.364 "uuid": "9fb76b92-7659-4521-8246-11b987ea67ce", 00:15:43.364 "assigned_rate_limits": { 00:15:43.364 "rw_ios_per_sec": 0, 00:15:43.364 "rw_mbytes_per_sec": 0, 00:15:43.364 "r_mbytes_per_sec": 0, 00:15:43.364 "w_mbytes_per_sec": 0 00:15:43.364 }, 00:15:43.364 "claimed": true, 00:15:43.364 "claim_type": "exclusive_write", 00:15:43.364 "zoned": false, 00:15:43.364 "supported_io_types": { 00:15:43.364 "read": true, 00:15:43.364 "write": true, 00:15:43.364 "unmap": true, 00:15:43.364 "flush": true, 00:15:43.364 "reset": true, 00:15:43.364 "nvme_admin": false, 00:15:43.364 "nvme_io": false, 00:15:43.364 "nvme_io_md": false, 00:15:43.364 "write_zeroes": true, 00:15:43.364 "zcopy": true, 00:15:43.364 "get_zone_info": false, 00:15:43.364 "zone_management": false, 00:15:43.364 "zone_append": false, 00:15:43.364 "compare": false, 00:15:43.364 "compare_and_write": false, 00:15:43.364 "abort": true, 00:15:43.364 "seek_hole": false, 00:15:43.364 "seek_data": false, 00:15:43.364 "copy": true, 00:15:43.364 "nvme_iov_md": false 00:15:43.364 }, 00:15:43.364 "memory_domains": [ 00:15:43.364 { 00:15:43.364 "dma_device_id": "system", 00:15:43.364 "dma_device_type": 1 00:15:43.364 }, 00:15:43.364 { 00:15:43.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:43.364 "dma_device_type": 2 00:15:43.364 } 00:15:43.364 ], 00:15:43.364 "driver_specific": {} 00:15:43.364 } 
00:15:43.364 ] 00:15:43.364 20:28:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.364 20:28:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:43.364 20:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:43.364 20:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:43.364 20:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:43.364 20:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:43.364 20:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.364 20:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:43.364 20:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.364 20:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.364 20:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.364 20:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.364 20:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.364 20:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.364 20:28:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.364 20:28:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.365 20:28:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:15:43.365 20:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.365 "name": "Existed_Raid", 00:15:43.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.365 "strip_size_kb": 64, 00:15:43.365 "state": "configuring", 00:15:43.365 "raid_level": "raid5f", 00:15:43.365 "superblock": false, 00:15:43.365 "num_base_bdevs": 4, 00:15:43.365 "num_base_bdevs_discovered": 1, 00:15:43.365 "num_base_bdevs_operational": 4, 00:15:43.365 "base_bdevs_list": [ 00:15:43.365 { 00:15:43.365 "name": "BaseBdev1", 00:15:43.365 "uuid": "9fb76b92-7659-4521-8246-11b987ea67ce", 00:15:43.365 "is_configured": true, 00:15:43.365 "data_offset": 0, 00:15:43.365 "data_size": 65536 00:15:43.365 }, 00:15:43.365 { 00:15:43.365 "name": "BaseBdev2", 00:15:43.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.365 "is_configured": false, 00:15:43.365 "data_offset": 0, 00:15:43.365 "data_size": 0 00:15:43.365 }, 00:15:43.365 { 00:15:43.365 "name": "BaseBdev3", 00:15:43.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.365 "is_configured": false, 00:15:43.365 "data_offset": 0, 00:15:43.365 "data_size": 0 00:15:43.365 }, 00:15:43.365 { 00:15:43.365 "name": "BaseBdev4", 00:15:43.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.365 "is_configured": false, 00:15:43.365 "data_offset": 0, 00:15:43.365 "data_size": 0 00:15:43.365 } 00:15:43.365 ] 00:15:43.365 }' 00:15:43.365 20:28:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.365 20:28:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.624 20:28:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:43.624 20:28:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.624 20:28:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.624 
[2024-11-26 20:28:37.099110] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:43.624 [2024-11-26 20:28:37.099178] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:15:43.624 20:28:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.624 20:28:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:43.624 20:28:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.624 20:28:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.624 [2024-11-26 20:28:37.111104] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:43.624 [2024-11-26 20:28:37.113132] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:43.624 [2024-11-26 20:28:37.113175] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:43.624 [2024-11-26 20:28:37.113184] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:43.624 [2024-11-26 20:28:37.113194] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:43.624 [2024-11-26 20:28:37.113201] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:43.624 [2024-11-26 20:28:37.113209] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:43.624 20:28:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.624 20:28:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:43.624 20:28:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:15:43.624 20:28:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:43.624 20:28:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:43.624 20:28:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:43.624 20:28:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:43.624 20:28:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.624 20:28:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:43.624 20:28:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.624 20:28:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.624 20:28:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.624 20:28:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.624 20:28:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.624 20:28:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.624 20:28:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.624 20:28:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.624 20:28:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.624 20:28:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.624 "name": "Existed_Raid", 00:15:43.624 "uuid": "00000000-0000-0000-0000-000000000000", 
00:15:43.624 "strip_size_kb": 64, 00:15:43.624 "state": "configuring", 00:15:43.624 "raid_level": "raid5f", 00:15:43.624 "superblock": false, 00:15:43.624 "num_base_bdevs": 4, 00:15:43.624 "num_base_bdevs_discovered": 1, 00:15:43.624 "num_base_bdevs_operational": 4, 00:15:43.624 "base_bdevs_list": [ 00:15:43.624 { 00:15:43.624 "name": "BaseBdev1", 00:15:43.624 "uuid": "9fb76b92-7659-4521-8246-11b987ea67ce", 00:15:43.624 "is_configured": true, 00:15:43.624 "data_offset": 0, 00:15:43.624 "data_size": 65536 00:15:43.624 }, 00:15:43.624 { 00:15:43.624 "name": "BaseBdev2", 00:15:43.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.624 "is_configured": false, 00:15:43.624 "data_offset": 0, 00:15:43.624 "data_size": 0 00:15:43.624 }, 00:15:43.624 { 00:15:43.624 "name": "BaseBdev3", 00:15:43.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.624 "is_configured": false, 00:15:43.624 "data_offset": 0, 00:15:43.624 "data_size": 0 00:15:43.624 }, 00:15:43.624 { 00:15:43.624 "name": "BaseBdev4", 00:15:43.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.624 "is_configured": false, 00:15:43.624 "data_offset": 0, 00:15:43.624 "data_size": 0 00:15:43.624 } 00:15:43.624 ] 00:15:43.624 }' 00:15:43.624 20:28:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.624 20:28:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.192 20:28:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:44.192 20:28:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.192 20:28:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.192 [2024-11-26 20:28:37.549512] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:44.192 BaseBdev2 00:15:44.192 20:28:37 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.192 20:28:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:44.192 20:28:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:44.192 20:28:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:44.192 20:28:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:44.192 20:28:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:44.192 20:28:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:44.192 20:28:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:44.192 20:28:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.192 20:28:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.192 20:28:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.192 20:28:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:44.192 20:28:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.192 20:28:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.192 [ 00:15:44.192 { 00:15:44.192 "name": "BaseBdev2", 00:15:44.192 "aliases": [ 00:15:44.192 "ee54e51e-5774-4ead-9cf3-e14c1d4b220b" 00:15:44.192 ], 00:15:44.192 "product_name": "Malloc disk", 00:15:44.192 "block_size": 512, 00:15:44.192 "num_blocks": 65536, 00:15:44.192 "uuid": "ee54e51e-5774-4ead-9cf3-e14c1d4b220b", 00:15:44.192 "assigned_rate_limits": { 00:15:44.192 "rw_ios_per_sec": 0, 00:15:44.192 "rw_mbytes_per_sec": 0, 00:15:44.192 
"r_mbytes_per_sec": 0, 00:15:44.192 "w_mbytes_per_sec": 0 00:15:44.192 }, 00:15:44.192 "claimed": true, 00:15:44.192 "claim_type": "exclusive_write", 00:15:44.192 "zoned": false, 00:15:44.192 "supported_io_types": { 00:15:44.192 "read": true, 00:15:44.192 "write": true, 00:15:44.192 "unmap": true, 00:15:44.192 "flush": true, 00:15:44.192 "reset": true, 00:15:44.192 "nvme_admin": false, 00:15:44.192 "nvme_io": false, 00:15:44.192 "nvme_io_md": false, 00:15:44.192 "write_zeroes": true, 00:15:44.192 "zcopy": true, 00:15:44.192 "get_zone_info": false, 00:15:44.192 "zone_management": false, 00:15:44.192 "zone_append": false, 00:15:44.192 "compare": false, 00:15:44.192 "compare_and_write": false, 00:15:44.192 "abort": true, 00:15:44.192 "seek_hole": false, 00:15:44.192 "seek_data": false, 00:15:44.192 "copy": true, 00:15:44.192 "nvme_iov_md": false 00:15:44.192 }, 00:15:44.192 "memory_domains": [ 00:15:44.192 { 00:15:44.192 "dma_device_id": "system", 00:15:44.192 "dma_device_type": 1 00:15:44.192 }, 00:15:44.192 { 00:15:44.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:44.192 "dma_device_type": 2 00:15:44.192 } 00:15:44.192 ], 00:15:44.192 "driver_specific": {} 00:15:44.192 } 00:15:44.192 ] 00:15:44.192 20:28:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.192 20:28:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:44.192 20:28:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:44.192 20:28:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:44.192 20:28:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:44.192 20:28:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:44.192 20:28:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:15:44.192 20:28:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:44.192 20:28:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.192 20:28:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:44.192 20:28:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.192 20:28:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.192 20:28:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.192 20:28:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.192 20:28:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.192 20:28:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.192 20:28:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.192 20:28:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.192 20:28:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.192 20:28:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.192 "name": "Existed_Raid", 00:15:44.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.192 "strip_size_kb": 64, 00:15:44.192 "state": "configuring", 00:15:44.192 "raid_level": "raid5f", 00:15:44.192 "superblock": false, 00:15:44.192 "num_base_bdevs": 4, 00:15:44.192 "num_base_bdevs_discovered": 2, 00:15:44.192 "num_base_bdevs_operational": 4, 00:15:44.192 "base_bdevs_list": [ 00:15:44.192 { 00:15:44.192 "name": "BaseBdev1", 00:15:44.192 "uuid": 
"9fb76b92-7659-4521-8246-11b987ea67ce", 00:15:44.192 "is_configured": true, 00:15:44.192 "data_offset": 0, 00:15:44.192 "data_size": 65536 00:15:44.192 }, 00:15:44.192 { 00:15:44.192 "name": "BaseBdev2", 00:15:44.192 "uuid": "ee54e51e-5774-4ead-9cf3-e14c1d4b220b", 00:15:44.192 "is_configured": true, 00:15:44.192 "data_offset": 0, 00:15:44.192 "data_size": 65536 00:15:44.192 }, 00:15:44.192 { 00:15:44.192 "name": "BaseBdev3", 00:15:44.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.192 "is_configured": false, 00:15:44.192 "data_offset": 0, 00:15:44.192 "data_size": 0 00:15:44.192 }, 00:15:44.192 { 00:15:44.192 "name": "BaseBdev4", 00:15:44.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.192 "is_configured": false, 00:15:44.192 "data_offset": 0, 00:15:44.193 "data_size": 0 00:15:44.193 } 00:15:44.193 ] 00:15:44.193 }' 00:15:44.193 20:28:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.193 20:28:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.452 20:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:44.452 20:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.712 20:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.712 [2024-11-26 20:28:38.021345] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:44.712 BaseBdev3 00:15:44.712 20:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.712 20:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:44.712 20:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:44.712 20:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- 
# local bdev_timeout= 00:15:44.712 20:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:44.712 20:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:44.712 20:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:44.712 20:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:44.712 20:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.712 20:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.712 20:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.712 20:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:44.712 20:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.712 20:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.712 [ 00:15:44.712 { 00:15:44.712 "name": "BaseBdev3", 00:15:44.712 "aliases": [ 00:15:44.712 "b67796bb-c748-4564-8dea-1dcfdfcec9e6" 00:15:44.712 ], 00:15:44.712 "product_name": "Malloc disk", 00:15:44.712 "block_size": 512, 00:15:44.712 "num_blocks": 65536, 00:15:44.712 "uuid": "b67796bb-c748-4564-8dea-1dcfdfcec9e6", 00:15:44.712 "assigned_rate_limits": { 00:15:44.712 "rw_ios_per_sec": 0, 00:15:44.712 "rw_mbytes_per_sec": 0, 00:15:44.712 "r_mbytes_per_sec": 0, 00:15:44.712 "w_mbytes_per_sec": 0 00:15:44.712 }, 00:15:44.712 "claimed": true, 00:15:44.712 "claim_type": "exclusive_write", 00:15:44.712 "zoned": false, 00:15:44.712 "supported_io_types": { 00:15:44.712 "read": true, 00:15:44.712 "write": true, 00:15:44.712 "unmap": true, 00:15:44.712 "flush": true, 00:15:44.712 "reset": true, 00:15:44.712 "nvme_admin": false, 
00:15:44.712 "nvme_io": false, 00:15:44.712 "nvme_io_md": false, 00:15:44.712 "write_zeroes": true, 00:15:44.712 "zcopy": true, 00:15:44.712 "get_zone_info": false, 00:15:44.712 "zone_management": false, 00:15:44.712 "zone_append": false, 00:15:44.712 "compare": false, 00:15:44.712 "compare_and_write": false, 00:15:44.712 "abort": true, 00:15:44.712 "seek_hole": false, 00:15:44.712 "seek_data": false, 00:15:44.712 "copy": true, 00:15:44.712 "nvme_iov_md": false 00:15:44.712 }, 00:15:44.712 "memory_domains": [ 00:15:44.712 { 00:15:44.712 "dma_device_id": "system", 00:15:44.712 "dma_device_type": 1 00:15:44.712 }, 00:15:44.712 { 00:15:44.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:44.712 "dma_device_type": 2 00:15:44.712 } 00:15:44.712 ], 00:15:44.712 "driver_specific": {} 00:15:44.712 } 00:15:44.712 ] 00:15:44.712 20:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.712 20:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:44.712 20:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:44.712 20:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:44.712 20:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:44.712 20:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:44.712 20:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:44.712 20:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:44.712 20:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.712 20:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:15:44.712 20:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.712 20:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.712 20:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.712 20:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.712 20:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.712 20:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.712 20:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.712 20:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.712 20:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.712 20:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.712 "name": "Existed_Raid", 00:15:44.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.712 "strip_size_kb": 64, 00:15:44.712 "state": "configuring", 00:15:44.712 "raid_level": "raid5f", 00:15:44.712 "superblock": false, 00:15:44.712 "num_base_bdevs": 4, 00:15:44.712 "num_base_bdevs_discovered": 3, 00:15:44.712 "num_base_bdevs_operational": 4, 00:15:44.712 "base_bdevs_list": [ 00:15:44.712 { 00:15:44.712 "name": "BaseBdev1", 00:15:44.712 "uuid": "9fb76b92-7659-4521-8246-11b987ea67ce", 00:15:44.712 "is_configured": true, 00:15:44.712 "data_offset": 0, 00:15:44.712 "data_size": 65536 00:15:44.712 }, 00:15:44.712 { 00:15:44.712 "name": "BaseBdev2", 00:15:44.712 "uuid": "ee54e51e-5774-4ead-9cf3-e14c1d4b220b", 00:15:44.712 "is_configured": true, 00:15:44.712 "data_offset": 0, 00:15:44.712 "data_size": 65536 00:15:44.712 }, 00:15:44.712 { 
00:15:44.712 "name": "BaseBdev3", 00:15:44.712 "uuid": "b67796bb-c748-4564-8dea-1dcfdfcec9e6", 00:15:44.712 "is_configured": true, 00:15:44.712 "data_offset": 0, 00:15:44.712 "data_size": 65536 00:15:44.712 }, 00:15:44.712 { 00:15:44.712 "name": "BaseBdev4", 00:15:44.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.712 "is_configured": false, 00:15:44.712 "data_offset": 0, 00:15:44.712 "data_size": 0 00:15:44.712 } 00:15:44.712 ] 00:15:44.712 }' 00:15:44.712 20:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.712 20:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.971 20:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:44.971 20:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.971 20:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.971 [2024-11-26 20:28:38.510042] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:44.971 [2024-11-26 20:28:38.510104] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:15:44.971 [2024-11-26 20:28:38.510112] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:44.971 [2024-11-26 20:28:38.510381] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:15:44.971 [2024-11-26 20:28:38.510857] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:15:44.971 [2024-11-26 20:28:38.510879] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:15:44.971 [2024-11-26 20:28:38.511075] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:44.972 BaseBdev4 00:15:44.972 20:28:38 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.972 20:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:44.972 20:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:15:44.972 20:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:44.972 20:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:44.972 20:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:44.972 20:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:44.972 20:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:44.972 20:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.972 20:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.232 20:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.232 20:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:45.232 20:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.232 20:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.232 [ 00:15:45.232 { 00:15:45.232 "name": "BaseBdev4", 00:15:45.232 "aliases": [ 00:15:45.232 "b035e858-3c42-4625-ac23-40e3db8e6dc7" 00:15:45.232 ], 00:15:45.232 "product_name": "Malloc disk", 00:15:45.232 "block_size": 512, 00:15:45.232 "num_blocks": 65536, 00:15:45.232 "uuid": "b035e858-3c42-4625-ac23-40e3db8e6dc7", 00:15:45.232 "assigned_rate_limits": { 00:15:45.232 "rw_ios_per_sec": 0, 00:15:45.232 
"rw_mbytes_per_sec": 0, 00:15:45.232 "r_mbytes_per_sec": 0, 00:15:45.232 "w_mbytes_per_sec": 0 00:15:45.232 }, 00:15:45.232 "claimed": true, 00:15:45.232 "claim_type": "exclusive_write", 00:15:45.232 "zoned": false, 00:15:45.232 "supported_io_types": { 00:15:45.232 "read": true, 00:15:45.232 "write": true, 00:15:45.232 "unmap": true, 00:15:45.232 "flush": true, 00:15:45.232 "reset": true, 00:15:45.232 "nvme_admin": false, 00:15:45.232 "nvme_io": false, 00:15:45.232 "nvme_io_md": false, 00:15:45.232 "write_zeroes": true, 00:15:45.232 "zcopy": true, 00:15:45.232 "get_zone_info": false, 00:15:45.232 "zone_management": false, 00:15:45.232 "zone_append": false, 00:15:45.232 "compare": false, 00:15:45.232 "compare_and_write": false, 00:15:45.232 "abort": true, 00:15:45.232 "seek_hole": false, 00:15:45.232 "seek_data": false, 00:15:45.232 "copy": true, 00:15:45.232 "nvme_iov_md": false 00:15:45.232 }, 00:15:45.232 "memory_domains": [ 00:15:45.232 { 00:15:45.232 "dma_device_id": "system", 00:15:45.232 "dma_device_type": 1 00:15:45.232 }, 00:15:45.232 { 00:15:45.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.232 "dma_device_type": 2 00:15:45.232 } 00:15:45.232 ], 00:15:45.232 "driver_specific": {} 00:15:45.232 } 00:15:45.232 ] 00:15:45.232 20:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.232 20:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:45.232 20:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:45.232 20:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:45.232 20:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:45.232 20:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:45.232 20:28:38 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:45.232 20:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:45.232 20:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.232 20:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:45.232 20:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.232 20:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.232 20:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.232 20:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.232 20:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.232 20:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.232 20:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.232 20:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.232 20:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.232 20:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.232 "name": "Existed_Raid", 00:15:45.232 "uuid": "0c22bc73-dd56-4bef-8412-9a84fe1618e7", 00:15:45.232 "strip_size_kb": 64, 00:15:45.232 "state": "online", 00:15:45.232 "raid_level": "raid5f", 00:15:45.232 "superblock": false, 00:15:45.232 "num_base_bdevs": 4, 00:15:45.232 "num_base_bdevs_discovered": 4, 00:15:45.232 "num_base_bdevs_operational": 4, 00:15:45.232 "base_bdevs_list": [ 00:15:45.232 { 00:15:45.232 "name": 
"BaseBdev1", 00:15:45.232 "uuid": "9fb76b92-7659-4521-8246-11b987ea67ce", 00:15:45.232 "is_configured": true, 00:15:45.232 "data_offset": 0, 00:15:45.232 "data_size": 65536 00:15:45.232 }, 00:15:45.232 { 00:15:45.232 "name": "BaseBdev2", 00:15:45.232 "uuid": "ee54e51e-5774-4ead-9cf3-e14c1d4b220b", 00:15:45.232 "is_configured": true, 00:15:45.232 "data_offset": 0, 00:15:45.232 "data_size": 65536 00:15:45.232 }, 00:15:45.232 { 00:15:45.232 "name": "BaseBdev3", 00:15:45.232 "uuid": "b67796bb-c748-4564-8dea-1dcfdfcec9e6", 00:15:45.232 "is_configured": true, 00:15:45.232 "data_offset": 0, 00:15:45.232 "data_size": 65536 00:15:45.232 }, 00:15:45.232 { 00:15:45.232 "name": "BaseBdev4", 00:15:45.232 "uuid": "b035e858-3c42-4625-ac23-40e3db8e6dc7", 00:15:45.232 "is_configured": true, 00:15:45.232 "data_offset": 0, 00:15:45.232 "data_size": 65536 00:15:45.232 } 00:15:45.232 ] 00:15:45.232 }' 00:15:45.232 20:28:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.232 20:28:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.492 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:45.492 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:45.492 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:45.492 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:45.492 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:45.492 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:45.492 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:45.492 20:28:39 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.492 20:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.492 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:45.492 [2024-11-26 20:28:39.033535] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:45.752 20:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.752 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:45.752 "name": "Existed_Raid", 00:15:45.752 "aliases": [ 00:15:45.752 "0c22bc73-dd56-4bef-8412-9a84fe1618e7" 00:15:45.752 ], 00:15:45.752 "product_name": "Raid Volume", 00:15:45.752 "block_size": 512, 00:15:45.752 "num_blocks": 196608, 00:15:45.752 "uuid": "0c22bc73-dd56-4bef-8412-9a84fe1618e7", 00:15:45.752 "assigned_rate_limits": { 00:15:45.752 "rw_ios_per_sec": 0, 00:15:45.752 "rw_mbytes_per_sec": 0, 00:15:45.752 "r_mbytes_per_sec": 0, 00:15:45.752 "w_mbytes_per_sec": 0 00:15:45.752 }, 00:15:45.752 "claimed": false, 00:15:45.752 "zoned": false, 00:15:45.752 "supported_io_types": { 00:15:45.752 "read": true, 00:15:45.752 "write": true, 00:15:45.752 "unmap": false, 00:15:45.752 "flush": false, 00:15:45.752 "reset": true, 00:15:45.752 "nvme_admin": false, 00:15:45.752 "nvme_io": false, 00:15:45.752 "nvme_io_md": false, 00:15:45.752 "write_zeroes": true, 00:15:45.752 "zcopy": false, 00:15:45.752 "get_zone_info": false, 00:15:45.752 "zone_management": false, 00:15:45.752 "zone_append": false, 00:15:45.752 "compare": false, 00:15:45.752 "compare_and_write": false, 00:15:45.752 "abort": false, 00:15:45.752 "seek_hole": false, 00:15:45.752 "seek_data": false, 00:15:45.752 "copy": false, 00:15:45.752 "nvme_iov_md": false 00:15:45.752 }, 00:15:45.752 "driver_specific": { 00:15:45.752 "raid": { 00:15:45.752 "uuid": "0c22bc73-dd56-4bef-8412-9a84fe1618e7", 00:15:45.752 "strip_size_kb": 64, 
00:15:45.752 "state": "online", 00:15:45.752 "raid_level": "raid5f", 00:15:45.752 "superblock": false, 00:15:45.752 "num_base_bdevs": 4, 00:15:45.752 "num_base_bdevs_discovered": 4, 00:15:45.752 "num_base_bdevs_operational": 4, 00:15:45.752 "base_bdevs_list": [ 00:15:45.752 { 00:15:45.752 "name": "BaseBdev1", 00:15:45.752 "uuid": "9fb76b92-7659-4521-8246-11b987ea67ce", 00:15:45.752 "is_configured": true, 00:15:45.752 "data_offset": 0, 00:15:45.752 "data_size": 65536 00:15:45.752 }, 00:15:45.752 { 00:15:45.752 "name": "BaseBdev2", 00:15:45.752 "uuid": "ee54e51e-5774-4ead-9cf3-e14c1d4b220b", 00:15:45.752 "is_configured": true, 00:15:45.752 "data_offset": 0, 00:15:45.752 "data_size": 65536 00:15:45.752 }, 00:15:45.752 { 00:15:45.752 "name": "BaseBdev3", 00:15:45.752 "uuid": "b67796bb-c748-4564-8dea-1dcfdfcec9e6", 00:15:45.752 "is_configured": true, 00:15:45.752 "data_offset": 0, 00:15:45.752 "data_size": 65536 00:15:45.752 }, 00:15:45.752 { 00:15:45.752 "name": "BaseBdev4", 00:15:45.752 "uuid": "b035e858-3c42-4625-ac23-40e3db8e6dc7", 00:15:45.752 "is_configured": true, 00:15:45.752 "data_offset": 0, 00:15:45.752 "data_size": 65536 00:15:45.752 } 00:15:45.752 ] 00:15:45.752 } 00:15:45.752 } 00:15:45.752 }' 00:15:45.752 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:45.752 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:45.752 BaseBdev2 00:15:45.752 BaseBdev3 00:15:45.752 BaseBdev4' 00:15:45.752 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:45.752 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:45.752 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:45.752 20:28:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:45.752 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:45.752 20:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.752 20:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.752 20:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.752 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:45.752 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:45.752 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:45.752 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:45.752 20:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.752 20:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.752 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:45.752 20:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.752 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:45.752 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:45.752 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:45.752 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:45.752 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:45.752 20:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.752 20:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.752 20:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.752 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:45.752 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:45.752 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:45.752 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:45.752 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:45.752 20:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.752 20:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.752 20:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.752 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:45.752 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:45.753 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:46.012 20:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.012 20:28:39 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:46.012 [2024-11-26 20:28:39.308906] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:46.012 20:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.012 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:46.012 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:46.012 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:46.012 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:46.012 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:46.012 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:46.012 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:46.012 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:46.012 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:46.012 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.012 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:46.012 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.012 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.012 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.012 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.012 20:28:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.012 20:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.012 20:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.012 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.012 20:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.012 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.012 "name": "Existed_Raid", 00:15:46.012 "uuid": "0c22bc73-dd56-4bef-8412-9a84fe1618e7", 00:15:46.012 "strip_size_kb": 64, 00:15:46.012 "state": "online", 00:15:46.012 "raid_level": "raid5f", 00:15:46.012 "superblock": false, 00:15:46.012 "num_base_bdevs": 4, 00:15:46.012 "num_base_bdevs_discovered": 3, 00:15:46.012 "num_base_bdevs_operational": 3, 00:15:46.012 "base_bdevs_list": [ 00:15:46.012 { 00:15:46.012 "name": null, 00:15:46.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.012 "is_configured": false, 00:15:46.012 "data_offset": 0, 00:15:46.012 "data_size": 65536 00:15:46.012 }, 00:15:46.012 { 00:15:46.012 "name": "BaseBdev2", 00:15:46.012 "uuid": "ee54e51e-5774-4ead-9cf3-e14c1d4b220b", 00:15:46.012 "is_configured": true, 00:15:46.012 "data_offset": 0, 00:15:46.012 "data_size": 65536 00:15:46.012 }, 00:15:46.012 { 00:15:46.012 "name": "BaseBdev3", 00:15:46.012 "uuid": "b67796bb-c748-4564-8dea-1dcfdfcec9e6", 00:15:46.012 "is_configured": true, 00:15:46.012 "data_offset": 0, 00:15:46.012 "data_size": 65536 00:15:46.012 }, 00:15:46.012 { 00:15:46.012 "name": "BaseBdev4", 00:15:46.012 "uuid": "b035e858-3c42-4625-ac23-40e3db8e6dc7", 00:15:46.012 "is_configured": true, 00:15:46.012 "data_offset": 0, 00:15:46.012 "data_size": 65536 00:15:46.012 } 00:15:46.012 ] 00:15:46.012 }' 00:15:46.012 
20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.012 20:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.272 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:46.272 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:46.272 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.272 20:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.272 20:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.272 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:46.272 20:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.272 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:46.272 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:46.272 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:46.272 20:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.272 20:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.272 [2024-11-26 20:28:39.780896] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:46.272 [2024-11-26 20:28:39.781011] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:46.272 [2024-11-26 20:28:39.801731] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:46.272 20:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:15:46.272 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:46.272 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:46.272 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:46.272 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.272 20:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.272 20:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.532 20:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.532 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:46.532 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:46.532 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:46.532 20:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.532 20:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.532 [2024-11-26 20:28:39.857724] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:46.532 20:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.532 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:46.532 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:46.532 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.532 20:28:39 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.532 20:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.532 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:46.532 20:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.532 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:46.532 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:46.532 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:46.532 20:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.532 20:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.532 [2024-11-26 20:28:39.933681] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:46.532 [2024-11-26 20:28:39.933732] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:15:46.532 20:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.532 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:46.532 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:46.532 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.532 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:46.532 20:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.532 20:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:15:46.532 20:28:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.532 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:46.532 20:28:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:46.532 20:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:46.532 20:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:46.532 20:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:46.532 20:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:46.532 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.532 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.532 BaseBdev2 00:15:46.532 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.532 20:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:46.532 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:46.532 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:46.532 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:46.532 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:46.532 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:46.532 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:46.532 20:28:40 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.532 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.532 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.532 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:46.532 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.532 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.532 [ 00:15:46.532 { 00:15:46.532 "name": "BaseBdev2", 00:15:46.532 "aliases": [ 00:15:46.532 "3e3b5e0b-e92f-417d-8cbb-be25acdbca7a" 00:15:46.532 ], 00:15:46.532 "product_name": "Malloc disk", 00:15:46.532 "block_size": 512, 00:15:46.532 "num_blocks": 65536, 00:15:46.532 "uuid": "3e3b5e0b-e92f-417d-8cbb-be25acdbca7a", 00:15:46.532 "assigned_rate_limits": { 00:15:46.532 "rw_ios_per_sec": 0, 00:15:46.532 "rw_mbytes_per_sec": 0, 00:15:46.532 "r_mbytes_per_sec": 0, 00:15:46.532 "w_mbytes_per_sec": 0 00:15:46.532 }, 00:15:46.532 "claimed": false, 00:15:46.532 "zoned": false, 00:15:46.532 "supported_io_types": { 00:15:46.532 "read": true, 00:15:46.532 "write": true, 00:15:46.532 "unmap": true, 00:15:46.532 "flush": true, 00:15:46.532 "reset": true, 00:15:46.532 "nvme_admin": false, 00:15:46.532 "nvme_io": false, 00:15:46.532 "nvme_io_md": false, 00:15:46.532 "write_zeroes": true, 00:15:46.532 "zcopy": true, 00:15:46.532 "get_zone_info": false, 00:15:46.532 "zone_management": false, 00:15:46.532 "zone_append": false, 00:15:46.532 "compare": false, 00:15:46.532 "compare_and_write": false, 00:15:46.532 "abort": true, 00:15:46.532 "seek_hole": false, 00:15:46.532 "seek_data": false, 00:15:46.532 "copy": true, 00:15:46.532 "nvme_iov_md": false 00:15:46.532 }, 00:15:46.532 "memory_domains": [ 00:15:46.532 { 00:15:46.532 "dma_device_id": "system", 00:15:46.532 
"dma_device_type": 1 00:15:46.532 }, 00:15:46.533 { 00:15:46.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.533 "dma_device_type": 2 00:15:46.533 } 00:15:46.533 ], 00:15:46.533 "driver_specific": {} 00:15:46.533 } 00:15:46.533 ] 00:15:46.533 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.533 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:46.533 20:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:46.533 20:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:46.533 20:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:46.533 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.533 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.533 BaseBdev3 00:15:46.533 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.533 20:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:46.533 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:46.533 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:46.533 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:46.533 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:46.533 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:46.533 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:46.533 20:28:40 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.533 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.533 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.533 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:46.533 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.533 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.794 [ 00:15:46.794 { 00:15:46.794 "name": "BaseBdev3", 00:15:46.794 "aliases": [ 00:15:46.794 "64e502a3-9d5c-40f1-9c57-52d721febac8" 00:15:46.794 ], 00:15:46.794 "product_name": "Malloc disk", 00:15:46.794 "block_size": 512, 00:15:46.794 "num_blocks": 65536, 00:15:46.794 "uuid": "64e502a3-9d5c-40f1-9c57-52d721febac8", 00:15:46.794 "assigned_rate_limits": { 00:15:46.794 "rw_ios_per_sec": 0, 00:15:46.794 "rw_mbytes_per_sec": 0, 00:15:46.794 "r_mbytes_per_sec": 0, 00:15:46.794 "w_mbytes_per_sec": 0 00:15:46.794 }, 00:15:46.794 "claimed": false, 00:15:46.794 "zoned": false, 00:15:46.794 "supported_io_types": { 00:15:46.794 "read": true, 00:15:46.794 "write": true, 00:15:46.794 "unmap": true, 00:15:46.794 "flush": true, 00:15:46.794 "reset": true, 00:15:46.794 "nvme_admin": false, 00:15:46.794 "nvme_io": false, 00:15:46.794 "nvme_io_md": false, 00:15:46.794 "write_zeroes": true, 00:15:46.794 "zcopy": true, 00:15:46.794 "get_zone_info": false, 00:15:46.794 "zone_management": false, 00:15:46.794 "zone_append": false, 00:15:46.794 "compare": false, 00:15:46.794 "compare_and_write": false, 00:15:46.794 "abort": true, 00:15:46.794 "seek_hole": false, 00:15:46.794 "seek_data": false, 00:15:46.794 "copy": true, 00:15:46.794 "nvme_iov_md": false 00:15:46.794 }, 00:15:46.794 "memory_domains": [ 00:15:46.794 { 00:15:46.794 
"dma_device_id": "system", 00:15:46.794 "dma_device_type": 1 00:15:46.794 }, 00:15:46.794 { 00:15:46.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.794 "dma_device_type": 2 00:15:46.794 } 00:15:46.794 ], 00:15:46.794 "driver_specific": {} 00:15:46.794 } 00:15:46.794 ] 00:15:46.794 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.794 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:46.794 20:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:46.794 20:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:46.794 20:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:46.794 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.794 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.794 BaseBdev4 00:15:46.794 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.794 20:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:46.794 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:15:46.794 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:46.794 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:46.794 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:46.794 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:46.794 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 
00:15:46.794 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.794 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.794 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.794 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:46.794 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.794 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.794 [ 00:15:46.794 { 00:15:46.794 "name": "BaseBdev4", 00:15:46.794 "aliases": [ 00:15:46.794 "8260db5d-29b3-4c42-959f-d4d020164934" 00:15:46.794 ], 00:15:46.794 "product_name": "Malloc disk", 00:15:46.794 "block_size": 512, 00:15:46.794 "num_blocks": 65536, 00:15:46.794 "uuid": "8260db5d-29b3-4c42-959f-d4d020164934", 00:15:46.794 "assigned_rate_limits": { 00:15:46.794 "rw_ios_per_sec": 0, 00:15:46.794 "rw_mbytes_per_sec": 0, 00:15:46.794 "r_mbytes_per_sec": 0, 00:15:46.794 "w_mbytes_per_sec": 0 00:15:46.794 }, 00:15:46.794 "claimed": false, 00:15:46.794 "zoned": false, 00:15:46.794 "supported_io_types": { 00:15:46.794 "read": true, 00:15:46.794 "write": true, 00:15:46.794 "unmap": true, 00:15:46.794 "flush": true, 00:15:46.794 "reset": true, 00:15:46.794 "nvme_admin": false, 00:15:46.794 "nvme_io": false, 00:15:46.794 "nvme_io_md": false, 00:15:46.794 "write_zeroes": true, 00:15:46.794 "zcopy": true, 00:15:46.794 "get_zone_info": false, 00:15:46.794 "zone_management": false, 00:15:46.794 "zone_append": false, 00:15:46.794 "compare": false, 00:15:46.794 "compare_and_write": false, 00:15:46.794 "abort": true, 00:15:46.794 "seek_hole": false, 00:15:46.794 "seek_data": false, 00:15:46.794 "copy": true, 00:15:46.794 "nvme_iov_md": false 00:15:46.794 }, 00:15:46.794 "memory_domains": [ 
00:15:46.794 { 00:15:46.794 "dma_device_id": "system", 00:15:46.794 "dma_device_type": 1 00:15:46.794 }, 00:15:46.794 { 00:15:46.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.794 "dma_device_type": 2 00:15:46.794 } 00:15:46.794 ], 00:15:46.794 "driver_specific": {} 00:15:46.794 } 00:15:46.794 ] 00:15:46.794 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.794 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:46.794 20:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:46.794 20:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:46.794 20:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:46.794 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.794 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.794 [2024-11-26 20:28:40.152450] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:46.794 [2024-11-26 20:28:40.152498] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:46.794 [2024-11-26 20:28:40.152518] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:46.794 [2024-11-26 20:28:40.154394] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:46.794 [2024-11-26 20:28:40.154446] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:46.794 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.795 20:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # 
verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:46.795 20:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:46.795 20:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:46.795 20:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:46.795 20:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.795 20:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:46.795 20:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.795 20:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.795 20:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.795 20:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.795 20:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.795 20:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.795 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.795 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.795 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.795 20:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.795 "name": "Existed_Raid", 00:15:46.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.795 "strip_size_kb": 64, 00:15:46.795 "state": "configuring", 00:15:46.795 "raid_level": "raid5f", 00:15:46.795 
"superblock": false, 00:15:46.795 "num_base_bdevs": 4, 00:15:46.795 "num_base_bdevs_discovered": 3, 00:15:46.795 "num_base_bdevs_operational": 4, 00:15:46.795 "base_bdevs_list": [ 00:15:46.795 { 00:15:46.795 "name": "BaseBdev1", 00:15:46.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.795 "is_configured": false, 00:15:46.795 "data_offset": 0, 00:15:46.795 "data_size": 0 00:15:46.795 }, 00:15:46.795 { 00:15:46.795 "name": "BaseBdev2", 00:15:46.795 "uuid": "3e3b5e0b-e92f-417d-8cbb-be25acdbca7a", 00:15:46.795 "is_configured": true, 00:15:46.795 "data_offset": 0, 00:15:46.795 "data_size": 65536 00:15:46.795 }, 00:15:46.795 { 00:15:46.795 "name": "BaseBdev3", 00:15:46.795 "uuid": "64e502a3-9d5c-40f1-9c57-52d721febac8", 00:15:46.795 "is_configured": true, 00:15:46.795 "data_offset": 0, 00:15:46.795 "data_size": 65536 00:15:46.795 }, 00:15:46.795 { 00:15:46.795 "name": "BaseBdev4", 00:15:46.795 "uuid": "8260db5d-29b3-4c42-959f-d4d020164934", 00:15:46.795 "is_configured": true, 00:15:46.795 "data_offset": 0, 00:15:46.795 "data_size": 65536 00:15:46.795 } 00:15:46.795 ] 00:15:46.795 }' 00:15:46.795 20:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.795 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.056 20:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:47.056 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.056 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.338 [2024-11-26 20:28:40.607754] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:47.338 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.338 20:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid5f 64 4 00:15:47.338 20:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:47.338 20:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:47.338 20:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:47.338 20:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:47.338 20:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:47.338 20:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.338 20:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.338 20:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.338 20:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.338 20:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.338 20:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.338 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.338 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.338 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.338 20:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.338 "name": "Existed_Raid", 00:15:47.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.338 "strip_size_kb": 64, 00:15:47.338 "state": "configuring", 00:15:47.338 "raid_level": "raid5f", 00:15:47.338 "superblock": false, 
00:15:47.338 "num_base_bdevs": 4, 00:15:47.338 "num_base_bdevs_discovered": 2, 00:15:47.338 "num_base_bdevs_operational": 4, 00:15:47.338 "base_bdevs_list": [ 00:15:47.338 { 00:15:47.338 "name": "BaseBdev1", 00:15:47.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.338 "is_configured": false, 00:15:47.338 "data_offset": 0, 00:15:47.338 "data_size": 0 00:15:47.338 }, 00:15:47.338 { 00:15:47.338 "name": null, 00:15:47.338 "uuid": "3e3b5e0b-e92f-417d-8cbb-be25acdbca7a", 00:15:47.338 "is_configured": false, 00:15:47.338 "data_offset": 0, 00:15:47.338 "data_size": 65536 00:15:47.338 }, 00:15:47.338 { 00:15:47.338 "name": "BaseBdev3", 00:15:47.338 "uuid": "64e502a3-9d5c-40f1-9c57-52d721febac8", 00:15:47.338 "is_configured": true, 00:15:47.338 "data_offset": 0, 00:15:47.338 "data_size": 65536 00:15:47.338 }, 00:15:47.338 { 00:15:47.338 "name": "BaseBdev4", 00:15:47.338 "uuid": "8260db5d-29b3-4c42-959f-d4d020164934", 00:15:47.338 "is_configured": true, 00:15:47.338 "data_offset": 0, 00:15:47.339 "data_size": 65536 00:15:47.339 } 00:15:47.339 ] 00:15:47.339 }' 00:15:47.339 20:28:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.339 20:28:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.599 20:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.599 20:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.599 20:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.599 20:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:47.599 20:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.599 20:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:47.599 
20:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:47.599 20:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.599 20:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.599 [2024-11-26 20:28:41.122787] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:47.599 BaseBdev1 00:15:47.599 20:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.599 20:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:47.599 20:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:47.599 20:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:47.599 20:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:47.599 20:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:47.599 20:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:47.599 20:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:47.599 20:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.599 20:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.599 20:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.599 20:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:47.599 20:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.599 
20:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.599 [ 00:15:47.599 { 00:15:47.599 "name": "BaseBdev1", 00:15:47.599 "aliases": [ 00:15:47.599 "d455b2a2-8692-41dd-8e94-073d91074a0b" 00:15:47.599 ], 00:15:47.599 "product_name": "Malloc disk", 00:15:47.599 "block_size": 512, 00:15:47.599 "num_blocks": 65536, 00:15:47.599 "uuid": "d455b2a2-8692-41dd-8e94-073d91074a0b", 00:15:47.599 "assigned_rate_limits": { 00:15:47.859 "rw_ios_per_sec": 0, 00:15:47.859 "rw_mbytes_per_sec": 0, 00:15:47.859 "r_mbytes_per_sec": 0, 00:15:47.859 "w_mbytes_per_sec": 0 00:15:47.859 }, 00:15:47.859 "claimed": true, 00:15:47.859 "claim_type": "exclusive_write", 00:15:47.859 "zoned": false, 00:15:47.859 "supported_io_types": { 00:15:47.859 "read": true, 00:15:47.859 "write": true, 00:15:47.859 "unmap": true, 00:15:47.859 "flush": true, 00:15:47.859 "reset": true, 00:15:47.859 "nvme_admin": false, 00:15:47.859 "nvme_io": false, 00:15:47.859 "nvme_io_md": false, 00:15:47.859 "write_zeroes": true, 00:15:47.859 "zcopy": true, 00:15:47.859 "get_zone_info": false, 00:15:47.859 "zone_management": false, 00:15:47.859 "zone_append": false, 00:15:47.859 "compare": false, 00:15:47.859 "compare_and_write": false, 00:15:47.859 "abort": true, 00:15:47.859 "seek_hole": false, 00:15:47.859 "seek_data": false, 00:15:47.859 "copy": true, 00:15:47.859 "nvme_iov_md": false 00:15:47.859 }, 00:15:47.859 "memory_domains": [ 00:15:47.859 { 00:15:47.859 "dma_device_id": "system", 00:15:47.859 "dma_device_type": 1 00:15:47.859 }, 00:15:47.859 { 00:15:47.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.859 "dma_device_type": 2 00:15:47.859 } 00:15:47.859 ], 00:15:47.859 "driver_specific": {} 00:15:47.859 } 00:15:47.859 ] 00:15:47.859 20:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.859 20:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:47.859 20:28:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:47.859 20:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:47.859 20:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:47.859 20:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:47.859 20:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:47.859 20:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:47.859 20:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.859 20:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.859 20:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.859 20:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.859 20:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.859 20:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.859 20:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.859 20:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.859 20:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.859 20:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.859 "name": "Existed_Raid", 00:15:47.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.859 "strip_size_kb": 64, 00:15:47.859 "state": 
"configuring", 00:15:47.859 "raid_level": "raid5f", 00:15:47.859 "superblock": false, 00:15:47.859 "num_base_bdevs": 4, 00:15:47.859 "num_base_bdevs_discovered": 3, 00:15:47.859 "num_base_bdevs_operational": 4, 00:15:47.859 "base_bdevs_list": [ 00:15:47.859 { 00:15:47.859 "name": "BaseBdev1", 00:15:47.859 "uuid": "d455b2a2-8692-41dd-8e94-073d91074a0b", 00:15:47.859 "is_configured": true, 00:15:47.859 "data_offset": 0, 00:15:47.859 "data_size": 65536 00:15:47.859 }, 00:15:47.859 { 00:15:47.859 "name": null, 00:15:47.859 "uuid": "3e3b5e0b-e92f-417d-8cbb-be25acdbca7a", 00:15:47.859 "is_configured": false, 00:15:47.859 "data_offset": 0, 00:15:47.859 "data_size": 65536 00:15:47.859 }, 00:15:47.859 { 00:15:47.859 "name": "BaseBdev3", 00:15:47.859 "uuid": "64e502a3-9d5c-40f1-9c57-52d721febac8", 00:15:47.859 "is_configured": true, 00:15:47.859 "data_offset": 0, 00:15:47.859 "data_size": 65536 00:15:47.859 }, 00:15:47.859 { 00:15:47.859 "name": "BaseBdev4", 00:15:47.859 "uuid": "8260db5d-29b3-4c42-959f-d4d020164934", 00:15:47.859 "is_configured": true, 00:15:47.859 "data_offset": 0, 00:15:47.859 "data_size": 65536 00:15:47.859 } 00:15:47.859 ] 00:15:47.859 }' 00:15:47.859 20:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.859 20:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.118 20:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.118 20:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.118 20:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.118 20:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:48.118 20:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.118 20:28:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:48.118 20:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:48.118 20:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.118 20:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.118 [2024-11-26 20:28:41.653984] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:48.118 20:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.119 20:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:48.119 20:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:48.119 20:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:48.119 20:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:48.119 20:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:48.119 20:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:48.119 20:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.119 20:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.119 20:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.119 20:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.119 20:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.119 20:28:41 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.119 20:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.119 20:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:48.378 20:28:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.378 20:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.378 "name": "Existed_Raid", 00:15:48.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.378 "strip_size_kb": 64, 00:15:48.378 "state": "configuring", 00:15:48.378 "raid_level": "raid5f", 00:15:48.378 "superblock": false, 00:15:48.378 "num_base_bdevs": 4, 00:15:48.378 "num_base_bdevs_discovered": 2, 00:15:48.378 "num_base_bdevs_operational": 4, 00:15:48.378 "base_bdevs_list": [ 00:15:48.378 { 00:15:48.378 "name": "BaseBdev1", 00:15:48.378 "uuid": "d455b2a2-8692-41dd-8e94-073d91074a0b", 00:15:48.378 "is_configured": true, 00:15:48.378 "data_offset": 0, 00:15:48.378 "data_size": 65536 00:15:48.378 }, 00:15:48.378 { 00:15:48.378 "name": null, 00:15:48.378 "uuid": "3e3b5e0b-e92f-417d-8cbb-be25acdbca7a", 00:15:48.378 "is_configured": false, 00:15:48.378 "data_offset": 0, 00:15:48.378 "data_size": 65536 00:15:48.378 }, 00:15:48.378 { 00:15:48.378 "name": null, 00:15:48.378 "uuid": "64e502a3-9d5c-40f1-9c57-52d721febac8", 00:15:48.378 "is_configured": false, 00:15:48.378 "data_offset": 0, 00:15:48.378 "data_size": 65536 00:15:48.378 }, 00:15:48.378 { 00:15:48.378 "name": "BaseBdev4", 00:15:48.378 "uuid": "8260db5d-29b3-4c42-959f-d4d020164934", 00:15:48.378 "is_configured": true, 00:15:48.378 "data_offset": 0, 00:15:48.378 "data_size": 65536 00:15:48.378 } 00:15:48.378 ] 00:15:48.378 }' 00:15:48.378 20:28:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.378 20:28:41 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.637 20:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.637 20:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:48.637 20:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.638 20:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.638 20:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.638 20:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:48.638 20:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:48.638 20:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.638 20:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.638 [2024-11-26 20:28:42.157224] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:48.638 20:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.638 20:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:48.638 20:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:48.638 20:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:48.638 20:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:48.638 20:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:48.638 
20:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:48.638 20:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.638 20:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.638 20:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.638 20:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.638 20:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.638 20:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:48.638 20:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.638 20:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.638 20:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.897 20:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.897 "name": "Existed_Raid", 00:15:48.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.897 "strip_size_kb": 64, 00:15:48.897 "state": "configuring", 00:15:48.897 "raid_level": "raid5f", 00:15:48.897 "superblock": false, 00:15:48.897 "num_base_bdevs": 4, 00:15:48.897 "num_base_bdevs_discovered": 3, 00:15:48.897 "num_base_bdevs_operational": 4, 00:15:48.897 "base_bdevs_list": [ 00:15:48.897 { 00:15:48.897 "name": "BaseBdev1", 00:15:48.897 "uuid": "d455b2a2-8692-41dd-8e94-073d91074a0b", 00:15:48.897 "is_configured": true, 00:15:48.897 "data_offset": 0, 00:15:48.897 "data_size": 65536 00:15:48.897 }, 00:15:48.897 { 00:15:48.897 "name": null, 00:15:48.897 "uuid": "3e3b5e0b-e92f-417d-8cbb-be25acdbca7a", 00:15:48.897 "is_configured": 
false, 00:15:48.897 "data_offset": 0, 00:15:48.897 "data_size": 65536 00:15:48.897 }, 00:15:48.897 { 00:15:48.897 "name": "BaseBdev3", 00:15:48.897 "uuid": "64e502a3-9d5c-40f1-9c57-52d721febac8", 00:15:48.897 "is_configured": true, 00:15:48.897 "data_offset": 0, 00:15:48.897 "data_size": 65536 00:15:48.897 }, 00:15:48.897 { 00:15:48.897 "name": "BaseBdev4", 00:15:48.897 "uuid": "8260db5d-29b3-4c42-959f-d4d020164934", 00:15:48.897 "is_configured": true, 00:15:48.897 "data_offset": 0, 00:15:48.897 "data_size": 65536 00:15:48.897 } 00:15:48.897 ] 00:15:48.897 }' 00:15:48.897 20:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.897 20:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.157 20:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.157 20:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:49.157 20:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.157 20:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.157 20:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.158 20:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:49.158 20:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:49.158 20:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.158 20:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.158 [2024-11-26 20:28:42.604488] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:49.158 20:28:42 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.158 20:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:49.158 20:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:49.158 20:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:49.158 20:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:49.158 20:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.158 20:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:49.158 20:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.158 20:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.158 20:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.158 20:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.158 20:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.158 20:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.158 20:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.158 20:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.158 20:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.158 20:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.158 "name": "Existed_Raid", 00:15:49.158 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:49.158 "strip_size_kb": 64, 00:15:49.158 "state": "configuring", 00:15:49.158 "raid_level": "raid5f", 00:15:49.158 "superblock": false, 00:15:49.158 "num_base_bdevs": 4, 00:15:49.158 "num_base_bdevs_discovered": 2, 00:15:49.158 "num_base_bdevs_operational": 4, 00:15:49.158 "base_bdevs_list": [ 00:15:49.158 { 00:15:49.158 "name": null, 00:15:49.158 "uuid": "d455b2a2-8692-41dd-8e94-073d91074a0b", 00:15:49.158 "is_configured": false, 00:15:49.158 "data_offset": 0, 00:15:49.158 "data_size": 65536 00:15:49.158 }, 00:15:49.158 { 00:15:49.158 "name": null, 00:15:49.158 "uuid": "3e3b5e0b-e92f-417d-8cbb-be25acdbca7a", 00:15:49.158 "is_configured": false, 00:15:49.158 "data_offset": 0, 00:15:49.158 "data_size": 65536 00:15:49.158 }, 00:15:49.158 { 00:15:49.158 "name": "BaseBdev3", 00:15:49.158 "uuid": "64e502a3-9d5c-40f1-9c57-52d721febac8", 00:15:49.158 "is_configured": true, 00:15:49.158 "data_offset": 0, 00:15:49.158 "data_size": 65536 00:15:49.158 }, 00:15:49.158 { 00:15:49.158 "name": "BaseBdev4", 00:15:49.158 "uuid": "8260db5d-29b3-4c42-959f-d4d020164934", 00:15:49.158 "is_configured": true, 00:15:49.158 "data_offset": 0, 00:15:49.158 "data_size": 65536 00:15:49.158 } 00:15:49.158 ] 00:15:49.158 }' 00:15:49.158 20:28:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.158 20:28:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.770 20:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:49.770 20:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.770 20:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.770 20:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.770 20:28:43 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.770 20:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:49.770 20:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:49.770 20:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.770 20:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.770 [2024-11-26 20:28:43.075171] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:49.770 20:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.770 20:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:49.770 20:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:49.770 20:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:49.770 20:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:49.770 20:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.770 20:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:49.770 20:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.770 20:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.770 20:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.770 20:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.770 20:28:43 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.770 20:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.770 20:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.770 20:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.770 20:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.770 20:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.770 "name": "Existed_Raid", 00:15:49.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.770 "strip_size_kb": 64, 00:15:49.770 "state": "configuring", 00:15:49.770 "raid_level": "raid5f", 00:15:49.770 "superblock": false, 00:15:49.770 "num_base_bdevs": 4, 00:15:49.770 "num_base_bdevs_discovered": 3, 00:15:49.770 "num_base_bdevs_operational": 4, 00:15:49.770 "base_bdevs_list": [ 00:15:49.770 { 00:15:49.770 "name": null, 00:15:49.770 "uuid": "d455b2a2-8692-41dd-8e94-073d91074a0b", 00:15:49.770 "is_configured": false, 00:15:49.770 "data_offset": 0, 00:15:49.770 "data_size": 65536 00:15:49.770 }, 00:15:49.770 { 00:15:49.770 "name": "BaseBdev2", 00:15:49.770 "uuid": "3e3b5e0b-e92f-417d-8cbb-be25acdbca7a", 00:15:49.770 "is_configured": true, 00:15:49.770 "data_offset": 0, 00:15:49.770 "data_size": 65536 00:15:49.770 }, 00:15:49.770 { 00:15:49.770 "name": "BaseBdev3", 00:15:49.770 "uuid": "64e502a3-9d5c-40f1-9c57-52d721febac8", 00:15:49.770 "is_configured": true, 00:15:49.770 "data_offset": 0, 00:15:49.770 "data_size": 65536 00:15:49.770 }, 00:15:49.770 { 00:15:49.770 "name": "BaseBdev4", 00:15:49.770 "uuid": "8260db5d-29b3-4c42-959f-d4d020164934", 00:15:49.770 "is_configured": true, 00:15:49.770 "data_offset": 0, 00:15:49.770 "data_size": 65536 00:15:49.770 } 00:15:49.770 ] 00:15:49.770 }' 00:15:49.770 20:28:43 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.770 20:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.028 20:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:50.028 20:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.028 20:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.028 20:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.028 20:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.028 20:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:50.028 20:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:50.028 20:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.028 20:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.028 20:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.028 20:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.028 20:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d455b2a2-8692-41dd-8e94-073d91074a0b 00:15:50.028 20:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.028 20:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.028 [2024-11-26 20:28:43.574092] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:50.028 [2024-11-26 
20:28:43.574145] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:15:50.028 [2024-11-26 20:28:43.574153] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:50.028 [2024-11-26 20:28:43.574411] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:50.028 [2024-11-26 20:28:43.574887] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:15:50.028 [2024-11-26 20:28:43.574910] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:15:50.028 [2024-11-26 20:28:43.575080] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:50.028 NewBaseBdev 00:15:50.028 20:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.028 20:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:50.028 20:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:15:50.028 20:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:50.028 20:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:50.028 20:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:50.028 20:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:50.028 20:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:50.028 20:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.028 20:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.287 20:28:43 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.287 20:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:50.287 20:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.287 20:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.287 [ 00:15:50.287 { 00:15:50.287 "name": "NewBaseBdev", 00:15:50.287 "aliases": [ 00:15:50.287 "d455b2a2-8692-41dd-8e94-073d91074a0b" 00:15:50.287 ], 00:15:50.287 "product_name": "Malloc disk", 00:15:50.287 "block_size": 512, 00:15:50.287 "num_blocks": 65536, 00:15:50.287 "uuid": "d455b2a2-8692-41dd-8e94-073d91074a0b", 00:15:50.287 "assigned_rate_limits": { 00:15:50.287 "rw_ios_per_sec": 0, 00:15:50.287 "rw_mbytes_per_sec": 0, 00:15:50.287 "r_mbytes_per_sec": 0, 00:15:50.287 "w_mbytes_per_sec": 0 00:15:50.287 }, 00:15:50.287 "claimed": true, 00:15:50.287 "claim_type": "exclusive_write", 00:15:50.287 "zoned": false, 00:15:50.287 "supported_io_types": { 00:15:50.287 "read": true, 00:15:50.287 "write": true, 00:15:50.287 "unmap": true, 00:15:50.287 "flush": true, 00:15:50.287 "reset": true, 00:15:50.287 "nvme_admin": false, 00:15:50.287 "nvme_io": false, 00:15:50.287 "nvme_io_md": false, 00:15:50.287 "write_zeroes": true, 00:15:50.287 "zcopy": true, 00:15:50.287 "get_zone_info": false, 00:15:50.287 "zone_management": false, 00:15:50.287 "zone_append": false, 00:15:50.288 "compare": false, 00:15:50.288 "compare_and_write": false, 00:15:50.288 "abort": true, 00:15:50.288 "seek_hole": false, 00:15:50.288 "seek_data": false, 00:15:50.288 "copy": true, 00:15:50.288 "nvme_iov_md": false 00:15:50.288 }, 00:15:50.288 "memory_domains": [ 00:15:50.288 { 00:15:50.288 "dma_device_id": "system", 00:15:50.288 "dma_device_type": 1 00:15:50.288 }, 00:15:50.288 { 00:15:50.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.288 "dma_device_type": 2 00:15:50.288 } 
00:15:50.288 ], 00:15:50.288 "driver_specific": {} 00:15:50.288 } 00:15:50.288 ] 00:15:50.288 20:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.288 20:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:50.288 20:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:50.288 20:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:50.288 20:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:50.288 20:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:50.288 20:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.288 20:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:50.288 20:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.288 20:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.288 20:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.288 20:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.288 20:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.288 20:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.288 20:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.288 20:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.288 20:28:43 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.288 20:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.288 "name": "Existed_Raid", 00:15:50.288 "uuid": "d5f54605-3166-4373-a6f0-5f2633cd6834", 00:15:50.288 "strip_size_kb": 64, 00:15:50.288 "state": "online", 00:15:50.288 "raid_level": "raid5f", 00:15:50.288 "superblock": false, 00:15:50.288 "num_base_bdevs": 4, 00:15:50.288 "num_base_bdevs_discovered": 4, 00:15:50.288 "num_base_bdevs_operational": 4, 00:15:50.288 "base_bdevs_list": [ 00:15:50.288 { 00:15:50.288 "name": "NewBaseBdev", 00:15:50.288 "uuid": "d455b2a2-8692-41dd-8e94-073d91074a0b", 00:15:50.288 "is_configured": true, 00:15:50.288 "data_offset": 0, 00:15:50.288 "data_size": 65536 00:15:50.288 }, 00:15:50.288 { 00:15:50.288 "name": "BaseBdev2", 00:15:50.288 "uuid": "3e3b5e0b-e92f-417d-8cbb-be25acdbca7a", 00:15:50.288 "is_configured": true, 00:15:50.288 "data_offset": 0, 00:15:50.288 "data_size": 65536 00:15:50.288 }, 00:15:50.288 { 00:15:50.288 "name": "BaseBdev3", 00:15:50.288 "uuid": "64e502a3-9d5c-40f1-9c57-52d721febac8", 00:15:50.288 "is_configured": true, 00:15:50.288 "data_offset": 0, 00:15:50.288 "data_size": 65536 00:15:50.288 }, 00:15:50.288 { 00:15:50.288 "name": "BaseBdev4", 00:15:50.288 "uuid": "8260db5d-29b3-4c42-959f-d4d020164934", 00:15:50.288 "is_configured": true, 00:15:50.288 "data_offset": 0, 00:15:50.288 "data_size": 65536 00:15:50.288 } 00:15:50.288 ] 00:15:50.288 }' 00:15:50.288 20:28:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.288 20:28:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.546 20:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:50.546 20:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:50.546 20:28:44 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:50.546 20:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:50.546 20:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:50.546 20:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:50.546 20:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:50.546 20:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:50.546 20:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.546 20:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.546 [2024-11-26 20:28:44.073567] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:50.546 20:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.806 20:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:50.806 "name": "Existed_Raid", 00:15:50.806 "aliases": [ 00:15:50.806 "d5f54605-3166-4373-a6f0-5f2633cd6834" 00:15:50.806 ], 00:15:50.806 "product_name": "Raid Volume", 00:15:50.806 "block_size": 512, 00:15:50.806 "num_blocks": 196608, 00:15:50.806 "uuid": "d5f54605-3166-4373-a6f0-5f2633cd6834", 00:15:50.806 "assigned_rate_limits": { 00:15:50.806 "rw_ios_per_sec": 0, 00:15:50.806 "rw_mbytes_per_sec": 0, 00:15:50.806 "r_mbytes_per_sec": 0, 00:15:50.806 "w_mbytes_per_sec": 0 00:15:50.806 }, 00:15:50.806 "claimed": false, 00:15:50.806 "zoned": false, 00:15:50.806 "supported_io_types": { 00:15:50.806 "read": true, 00:15:50.806 "write": true, 00:15:50.806 "unmap": false, 00:15:50.806 "flush": false, 00:15:50.806 "reset": true, 00:15:50.806 "nvme_admin": false, 00:15:50.806 "nvme_io": false, 00:15:50.806 "nvme_io_md": 
false, 00:15:50.806 "write_zeroes": true, 00:15:50.806 "zcopy": false, 00:15:50.806 "get_zone_info": false, 00:15:50.806 "zone_management": false, 00:15:50.806 "zone_append": false, 00:15:50.806 "compare": false, 00:15:50.806 "compare_and_write": false, 00:15:50.806 "abort": false, 00:15:50.806 "seek_hole": false, 00:15:50.806 "seek_data": false, 00:15:50.806 "copy": false, 00:15:50.806 "nvme_iov_md": false 00:15:50.806 }, 00:15:50.806 "driver_specific": { 00:15:50.806 "raid": { 00:15:50.806 "uuid": "d5f54605-3166-4373-a6f0-5f2633cd6834", 00:15:50.806 "strip_size_kb": 64, 00:15:50.806 "state": "online", 00:15:50.806 "raid_level": "raid5f", 00:15:50.806 "superblock": false, 00:15:50.806 "num_base_bdevs": 4, 00:15:50.806 "num_base_bdevs_discovered": 4, 00:15:50.806 "num_base_bdevs_operational": 4, 00:15:50.806 "base_bdevs_list": [ 00:15:50.806 { 00:15:50.806 "name": "NewBaseBdev", 00:15:50.806 "uuid": "d455b2a2-8692-41dd-8e94-073d91074a0b", 00:15:50.806 "is_configured": true, 00:15:50.806 "data_offset": 0, 00:15:50.806 "data_size": 65536 00:15:50.806 }, 00:15:50.806 { 00:15:50.806 "name": "BaseBdev2", 00:15:50.806 "uuid": "3e3b5e0b-e92f-417d-8cbb-be25acdbca7a", 00:15:50.806 "is_configured": true, 00:15:50.806 "data_offset": 0, 00:15:50.806 "data_size": 65536 00:15:50.806 }, 00:15:50.806 { 00:15:50.806 "name": "BaseBdev3", 00:15:50.806 "uuid": "64e502a3-9d5c-40f1-9c57-52d721febac8", 00:15:50.806 "is_configured": true, 00:15:50.806 "data_offset": 0, 00:15:50.806 "data_size": 65536 00:15:50.806 }, 00:15:50.806 { 00:15:50.806 "name": "BaseBdev4", 00:15:50.806 "uuid": "8260db5d-29b3-4c42-959f-d4d020164934", 00:15:50.806 "is_configured": true, 00:15:50.806 "data_offset": 0, 00:15:50.806 "data_size": 65536 00:15:50.806 } 00:15:50.806 ] 00:15:50.806 } 00:15:50.806 } 00:15:50.806 }' 00:15:50.806 20:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:50.806 20:28:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:50.806 BaseBdev2 00:15:50.806 BaseBdev3 00:15:50.806 BaseBdev4' 00:15:50.806 20:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:50.806 20:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:50.806 20:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:50.806 20:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:50.806 20:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:50.806 20:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.806 20:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.806 20:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.806 20:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:50.806 20:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:50.806 20:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:50.806 20:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:50.806 20:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:50.806 20:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.806 20:28:44 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:50.806 20:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.806 20:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:50.806 20:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:50.806 20:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:50.806 20:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:50.806 20:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:50.806 20:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.806 20:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.806 20:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.806 20:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:50.806 20:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:50.806 20:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:50.806 20:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:50.806 20:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.806 20:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.806 20:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:50.806 20:28:44 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.066 20:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:51.066 20:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:51.066 20:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:51.066 20:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.066 20:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.066 [2024-11-26 20:28:44.384785] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:51.066 [2024-11-26 20:28:44.384839] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:51.066 [2024-11-26 20:28:44.384927] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:51.066 [2024-11-26 20:28:44.385229] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:51.066 [2024-11-26 20:28:44.385248] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:15:51.066 20:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.066 20:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 93835 00:15:51.066 20:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 93835 ']' 00:15:51.066 20:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # kill -0 93835 00:15:51.066 20:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:15:51.066 20:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:15:51.066 20:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93835 00:15:51.066 20:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:51.066 20:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:51.066 killing process with pid 93835 00:15:51.066 20:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93835' 00:15:51.066 20:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 93835 00:15:51.066 [2024-11-26 20:28:44.422566] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:51.066 20:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 93835 00:15:51.066 [2024-11-26 20:28:44.490590] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:51.325 20:28:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:51.325 00:15:51.325 real 0m9.596s 00:15:51.325 user 0m16.305s 00:15:51.325 sys 0m1.958s 00:15:51.325 20:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:51.325 20:28:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.325 ************************************ 00:15:51.325 END TEST raid5f_state_function_test 00:15:51.325 ************************************ 00:15:51.584 20:28:44 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:15:51.584 20:28:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:51.584 20:28:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:51.584 20:28:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:51.584 ************************************ 00:15:51.584 START TEST 
raid5f_state_function_test_sb 00:15:51.584 ************************************ 00:15:51.584 20:28:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 true 00:15:51.584 20:28:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:51.584 20:28:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:51.584 20:28:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:51.584 20:28:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:51.584 20:28:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:51.584 20:28:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:51.584 20:28:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:51.584 20:28:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:51.584 20:28:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:51.584 20:28:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:51.584 20:28:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:51.584 20:28:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:51.584 20:28:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:51.584 20:28:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:51.584 20:28:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:51.584 20:28:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:51.584 
20:28:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:51.584 20:28:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:51.584 20:28:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:51.584 20:28:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:51.584 20:28:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:51.584 20:28:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:51.584 20:28:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:51.584 20:28:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:51.584 20:28:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:51.584 20:28:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:51.584 20:28:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:51.584 20:28:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:51.584 20:28:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:51.584 20:28:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=94489 00:15:51.584 20:28:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:51.584 Process raid pid: 94489 00:15:51.584 20:28:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 94489' 00:15:51.584 20:28:44 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 94489 00:15:51.584 20:28:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 94489 ']' 00:15:51.584 20:28:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.584 20:28:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:51.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:51.584 20:28:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:51.584 20:28:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:51.584 20:28:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.584 [2024-11-26 20:28:44.988355] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:15:51.584 [2024-11-26 20:28:44.988471] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:51.584 [2024-11-26 20:28:45.129588] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.843 [2024-11-26 20:28:45.202232] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.843 [2024-11-26 20:28:45.272605] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:51.843 [2024-11-26 20:28:45.272658] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:52.411 20:28:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:52.411 20:28:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:15:52.411 20:28:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:52.411 20:28:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.411 20:28:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.411 [2024-11-26 20:28:45.837224] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:52.411 [2024-11-26 20:28:45.837275] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:52.411 [2024-11-26 20:28:45.837290] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:52.411 [2024-11-26 20:28:45.837301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:52.411 [2024-11-26 20:28:45.837309] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:15:52.411 [2024-11-26 20:28:45.837321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:52.411 [2024-11-26 20:28:45.837328] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:52.411 [2024-11-26 20:28:45.837338] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:52.411 20:28:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.411 20:28:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:52.411 20:28:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:52.411 20:28:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:52.411 20:28:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:52.411 20:28:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:52.411 20:28:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:52.411 20:28:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.411 20:28:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.411 20:28:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.411 20:28:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.411 20:28:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.411 20:28:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:15:52.411 20:28:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.411 20:28:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.411 20:28:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.411 20:28:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.411 "name": "Existed_Raid", 00:15:52.411 "uuid": "a21903f7-6739-4c30-806d-7f1a4bbd8dd4", 00:15:52.411 "strip_size_kb": 64, 00:15:52.411 "state": "configuring", 00:15:52.411 "raid_level": "raid5f", 00:15:52.411 "superblock": true, 00:15:52.411 "num_base_bdevs": 4, 00:15:52.411 "num_base_bdevs_discovered": 0, 00:15:52.411 "num_base_bdevs_operational": 4, 00:15:52.411 "base_bdevs_list": [ 00:15:52.411 { 00:15:52.411 "name": "BaseBdev1", 00:15:52.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.411 "is_configured": false, 00:15:52.411 "data_offset": 0, 00:15:52.411 "data_size": 0 00:15:52.411 }, 00:15:52.411 { 00:15:52.411 "name": "BaseBdev2", 00:15:52.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.411 "is_configured": false, 00:15:52.411 "data_offset": 0, 00:15:52.411 "data_size": 0 00:15:52.411 }, 00:15:52.411 { 00:15:52.411 "name": "BaseBdev3", 00:15:52.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.411 "is_configured": false, 00:15:52.411 "data_offset": 0, 00:15:52.411 "data_size": 0 00:15:52.411 }, 00:15:52.411 { 00:15:52.411 "name": "BaseBdev4", 00:15:52.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.411 "is_configured": false, 00:15:52.411 "data_offset": 0, 00:15:52.411 "data_size": 0 00:15:52.411 } 00:15:52.411 ] 00:15:52.411 }' 00:15:52.411 20:28:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.411 20:28:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:52.979 20:28:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:52.979 20:28:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.979 20:28:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.979 [2024-11-26 20:28:46.284388] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:52.979 [2024-11-26 20:28:46.284435] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:15:52.979 20:28:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.979 20:28:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:52.979 20:28:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.979 20:28:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.979 [2024-11-26 20:28:46.292417] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:52.979 [2024-11-26 20:28:46.292456] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:52.979 [2024-11-26 20:28:46.292465] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:52.979 [2024-11-26 20:28:46.292474] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:52.979 [2024-11-26 20:28:46.292480] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:52.979 [2024-11-26 20:28:46.292489] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:52.979 [2024-11-26 20:28:46.292496] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:52.979 [2024-11-26 20:28:46.292504] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:52.979 20:28:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.979 20:28:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:52.979 20:28:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.979 20:28:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.979 [2024-11-26 20:28:46.311700] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:52.979 BaseBdev1 00:15:52.979 20:28:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.979 20:28:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:52.979 20:28:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:52.979 20:28:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:52.979 20:28:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:52.979 20:28:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:52.979 20:28:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:52.979 20:28:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:52.979 20:28:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.979 20:28:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:15:52.979 20:28:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.979 20:28:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:52.979 20:28:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.979 20:28:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.979 [ 00:15:52.979 { 00:15:52.979 "name": "BaseBdev1", 00:15:52.979 "aliases": [ 00:15:52.979 "b2ad0782-8153-4a34-83f8-87fd9819e63f" 00:15:52.979 ], 00:15:52.979 "product_name": "Malloc disk", 00:15:52.979 "block_size": 512, 00:15:52.979 "num_blocks": 65536, 00:15:52.979 "uuid": "b2ad0782-8153-4a34-83f8-87fd9819e63f", 00:15:52.979 "assigned_rate_limits": { 00:15:52.979 "rw_ios_per_sec": 0, 00:15:52.979 "rw_mbytes_per_sec": 0, 00:15:52.979 "r_mbytes_per_sec": 0, 00:15:52.979 "w_mbytes_per_sec": 0 00:15:52.979 }, 00:15:52.979 "claimed": true, 00:15:52.979 "claim_type": "exclusive_write", 00:15:52.979 "zoned": false, 00:15:52.979 "supported_io_types": { 00:15:52.979 "read": true, 00:15:52.979 "write": true, 00:15:52.979 "unmap": true, 00:15:52.979 "flush": true, 00:15:52.979 "reset": true, 00:15:52.979 "nvme_admin": false, 00:15:52.979 "nvme_io": false, 00:15:52.979 "nvme_io_md": false, 00:15:52.979 "write_zeroes": true, 00:15:52.979 "zcopy": true, 00:15:52.980 "get_zone_info": false, 00:15:52.980 "zone_management": false, 00:15:52.980 "zone_append": false, 00:15:52.980 "compare": false, 00:15:52.980 "compare_and_write": false, 00:15:52.980 "abort": true, 00:15:52.980 "seek_hole": false, 00:15:52.980 "seek_data": false, 00:15:52.980 "copy": true, 00:15:52.980 "nvme_iov_md": false 00:15:52.980 }, 00:15:52.980 "memory_domains": [ 00:15:52.980 { 00:15:52.980 "dma_device_id": "system", 00:15:52.980 "dma_device_type": 1 00:15:52.980 }, 00:15:52.980 { 00:15:52.980 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:52.980 "dma_device_type": 2 00:15:52.980 } 00:15:52.980 ], 00:15:52.980 "driver_specific": {} 00:15:52.980 } 00:15:52.980 ] 00:15:52.980 20:28:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.980 20:28:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:52.980 20:28:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:52.980 20:28:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:52.980 20:28:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:52.980 20:28:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:52.980 20:28:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:52.980 20:28:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:52.980 20:28:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.980 20:28:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.980 20:28:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.980 20:28:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.980 20:28:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.980 20:28:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.980 20:28:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.980 20:28:46 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.980 20:28:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.980 20:28:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.980 "name": "Existed_Raid", 00:15:52.980 "uuid": "a18a352e-2ae2-4f09-af70-73bc2c785722", 00:15:52.980 "strip_size_kb": 64, 00:15:52.980 "state": "configuring", 00:15:52.980 "raid_level": "raid5f", 00:15:52.980 "superblock": true, 00:15:52.980 "num_base_bdevs": 4, 00:15:52.980 "num_base_bdevs_discovered": 1, 00:15:52.980 "num_base_bdevs_operational": 4, 00:15:52.980 "base_bdevs_list": [ 00:15:52.980 { 00:15:52.980 "name": "BaseBdev1", 00:15:52.980 "uuid": "b2ad0782-8153-4a34-83f8-87fd9819e63f", 00:15:52.980 "is_configured": true, 00:15:52.980 "data_offset": 2048, 00:15:52.980 "data_size": 63488 00:15:52.980 }, 00:15:52.980 { 00:15:52.980 "name": "BaseBdev2", 00:15:52.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.980 "is_configured": false, 00:15:52.980 "data_offset": 0, 00:15:52.980 "data_size": 0 00:15:52.980 }, 00:15:52.980 { 00:15:52.980 "name": "BaseBdev3", 00:15:52.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.980 "is_configured": false, 00:15:52.980 "data_offset": 0, 00:15:52.980 "data_size": 0 00:15:52.980 }, 00:15:52.980 { 00:15:52.980 "name": "BaseBdev4", 00:15:52.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.980 "is_configured": false, 00:15:52.980 "data_offset": 0, 00:15:52.980 "data_size": 0 00:15:52.980 } 00:15:52.980 ] 00:15:52.980 }' 00:15:52.980 20:28:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.980 20:28:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.239 20:28:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:53.239 20:28:46 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.239 20:28:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.498 [2024-11-26 20:28:46.790919] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:53.498 [2024-11-26 20:28:46.790980] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:15:53.498 20:28:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.498 20:28:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:53.498 20:28:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.498 20:28:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.498 [2024-11-26 20:28:46.802938] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:53.498 [2024-11-26 20:28:46.804826] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:53.498 [2024-11-26 20:28:46.804865] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:53.498 [2024-11-26 20:28:46.804874] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:53.498 [2024-11-26 20:28:46.804883] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:53.498 [2024-11-26 20:28:46.804889] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:53.498 [2024-11-26 20:28:46.804897] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:53.498 20:28:46 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.498 20:28:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:53.498 20:28:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:53.498 20:28:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:53.498 20:28:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:53.498 20:28:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:53.498 20:28:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:53.498 20:28:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:53.498 20:28:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:53.498 20:28:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.498 20:28:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.498 20:28:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.498 20:28:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.498 20:28:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:53.498 20:28:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.498 20:28:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.498 20:28:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.498 20:28:46 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.498 20:28:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.498 "name": "Existed_Raid", 00:15:53.498 "uuid": "9986565b-6b4a-4f7c-b44c-3956b4ad318d", 00:15:53.498 "strip_size_kb": 64, 00:15:53.498 "state": "configuring", 00:15:53.498 "raid_level": "raid5f", 00:15:53.498 "superblock": true, 00:15:53.498 "num_base_bdevs": 4, 00:15:53.498 "num_base_bdevs_discovered": 1, 00:15:53.498 "num_base_bdevs_operational": 4, 00:15:53.498 "base_bdevs_list": [ 00:15:53.498 { 00:15:53.498 "name": "BaseBdev1", 00:15:53.498 "uuid": "b2ad0782-8153-4a34-83f8-87fd9819e63f", 00:15:53.498 "is_configured": true, 00:15:53.498 "data_offset": 2048, 00:15:53.498 "data_size": 63488 00:15:53.498 }, 00:15:53.498 { 00:15:53.498 "name": "BaseBdev2", 00:15:53.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.498 "is_configured": false, 00:15:53.498 "data_offset": 0, 00:15:53.498 "data_size": 0 00:15:53.498 }, 00:15:53.498 { 00:15:53.498 "name": "BaseBdev3", 00:15:53.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.498 "is_configured": false, 00:15:53.498 "data_offset": 0, 00:15:53.498 "data_size": 0 00:15:53.498 }, 00:15:53.498 { 00:15:53.498 "name": "BaseBdev4", 00:15:53.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.498 "is_configured": false, 00:15:53.498 "data_offset": 0, 00:15:53.498 "data_size": 0 00:15:53.498 } 00:15:53.498 ] 00:15:53.498 }' 00:15:53.498 20:28:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.498 20:28:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.758 20:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:53.758 20:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:53.758 20:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.758 [2024-11-26 20:28:47.267305] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:53.758 BaseBdev2 00:15:53.758 20:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.758 20:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:53.758 20:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:53.758 20:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:53.758 20:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:53.758 20:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:53.758 20:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:53.758 20:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:53.758 20:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.758 20:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.758 20:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.758 20:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:53.758 20:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.758 20:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.758 [ 00:15:53.758 { 00:15:53.758 "name": "BaseBdev2", 00:15:53.758 "aliases": [ 00:15:53.758 
"d187ee3b-ce79-4e74-b83b-84f00e6473a7" 00:15:53.758 ], 00:15:53.758 "product_name": "Malloc disk", 00:15:53.758 "block_size": 512, 00:15:53.758 "num_blocks": 65536, 00:15:53.758 "uuid": "d187ee3b-ce79-4e74-b83b-84f00e6473a7", 00:15:53.758 "assigned_rate_limits": { 00:15:53.758 "rw_ios_per_sec": 0, 00:15:53.758 "rw_mbytes_per_sec": 0, 00:15:53.758 "r_mbytes_per_sec": 0, 00:15:53.758 "w_mbytes_per_sec": 0 00:15:53.758 }, 00:15:53.758 "claimed": true, 00:15:53.758 "claim_type": "exclusive_write", 00:15:53.758 "zoned": false, 00:15:53.758 "supported_io_types": { 00:15:53.758 "read": true, 00:15:53.758 "write": true, 00:15:53.758 "unmap": true, 00:15:53.758 "flush": true, 00:15:53.758 "reset": true, 00:15:53.758 "nvme_admin": false, 00:15:53.758 "nvme_io": false, 00:15:53.758 "nvme_io_md": false, 00:15:53.758 "write_zeroes": true, 00:15:53.758 "zcopy": true, 00:15:53.758 "get_zone_info": false, 00:15:53.758 "zone_management": false, 00:15:53.758 "zone_append": false, 00:15:53.758 "compare": false, 00:15:53.758 "compare_and_write": false, 00:15:53.758 "abort": true, 00:15:53.758 "seek_hole": false, 00:15:53.758 "seek_data": false, 00:15:53.758 "copy": true, 00:15:53.758 "nvme_iov_md": false 00:15:53.758 }, 00:15:53.758 "memory_domains": [ 00:15:53.758 { 00:15:53.758 "dma_device_id": "system", 00:15:53.758 "dma_device_type": 1 00:15:53.758 }, 00:15:53.758 { 00:15:53.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:53.758 "dma_device_type": 2 00:15:53.758 } 00:15:53.758 ], 00:15:53.758 "driver_specific": {} 00:15:53.758 } 00:15:53.758 ] 00:15:53.758 20:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.758 20:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:53.758 20:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:53.758 20:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:15:53.758 20:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:53.758 20:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:53.758 20:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:53.758 20:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:53.758 20:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:53.758 20:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:53.758 20:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.758 20:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.758 20:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.758 20:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.018 20:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.018 20:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.018 20:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.018 20:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:54.018 20:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.018 20:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.018 "name": "Existed_Raid", 00:15:54.018 "uuid": 
"9986565b-6b4a-4f7c-b44c-3956b4ad318d", 00:15:54.018 "strip_size_kb": 64, 00:15:54.018 "state": "configuring", 00:15:54.018 "raid_level": "raid5f", 00:15:54.018 "superblock": true, 00:15:54.018 "num_base_bdevs": 4, 00:15:54.018 "num_base_bdevs_discovered": 2, 00:15:54.018 "num_base_bdevs_operational": 4, 00:15:54.018 "base_bdevs_list": [ 00:15:54.018 { 00:15:54.018 "name": "BaseBdev1", 00:15:54.018 "uuid": "b2ad0782-8153-4a34-83f8-87fd9819e63f", 00:15:54.018 "is_configured": true, 00:15:54.018 "data_offset": 2048, 00:15:54.018 "data_size": 63488 00:15:54.018 }, 00:15:54.018 { 00:15:54.018 "name": "BaseBdev2", 00:15:54.018 "uuid": "d187ee3b-ce79-4e74-b83b-84f00e6473a7", 00:15:54.018 "is_configured": true, 00:15:54.018 "data_offset": 2048, 00:15:54.018 "data_size": 63488 00:15:54.018 }, 00:15:54.018 { 00:15:54.018 "name": "BaseBdev3", 00:15:54.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.018 "is_configured": false, 00:15:54.018 "data_offset": 0, 00:15:54.018 "data_size": 0 00:15:54.018 }, 00:15:54.018 { 00:15:54.018 "name": "BaseBdev4", 00:15:54.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.018 "is_configured": false, 00:15:54.018 "data_offset": 0, 00:15:54.018 "data_size": 0 00:15:54.018 } 00:15:54.018 ] 00:15:54.018 }' 00:15:54.018 20:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.018 20:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.277 20:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:54.277 20:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.277 20:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.277 [2024-11-26 20:28:47.731162] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:54.277 BaseBdev3 
00:15:54.277 20:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.277 20:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:54.277 20:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:54.277 20:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:54.277 20:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:54.277 20:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:54.277 20:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:54.277 20:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:54.277 20:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.277 20:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.277 20:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.277 20:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:54.277 20:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.277 20:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.277 [ 00:15:54.277 { 00:15:54.277 "name": "BaseBdev3", 00:15:54.277 "aliases": [ 00:15:54.277 "45a576a3-45b5-4def-81da-118f6d617b60" 00:15:54.277 ], 00:15:54.277 "product_name": "Malloc disk", 00:15:54.277 "block_size": 512, 00:15:54.277 "num_blocks": 65536, 00:15:54.277 "uuid": "45a576a3-45b5-4def-81da-118f6d617b60", 00:15:54.277 
"assigned_rate_limits": { 00:15:54.277 "rw_ios_per_sec": 0, 00:15:54.277 "rw_mbytes_per_sec": 0, 00:15:54.277 "r_mbytes_per_sec": 0, 00:15:54.277 "w_mbytes_per_sec": 0 00:15:54.277 }, 00:15:54.277 "claimed": true, 00:15:54.277 "claim_type": "exclusive_write", 00:15:54.277 "zoned": false, 00:15:54.277 "supported_io_types": { 00:15:54.277 "read": true, 00:15:54.277 "write": true, 00:15:54.277 "unmap": true, 00:15:54.277 "flush": true, 00:15:54.277 "reset": true, 00:15:54.277 "nvme_admin": false, 00:15:54.277 "nvme_io": false, 00:15:54.277 "nvme_io_md": false, 00:15:54.277 "write_zeroes": true, 00:15:54.277 "zcopy": true, 00:15:54.277 "get_zone_info": false, 00:15:54.277 "zone_management": false, 00:15:54.277 "zone_append": false, 00:15:54.277 "compare": false, 00:15:54.278 "compare_and_write": false, 00:15:54.278 "abort": true, 00:15:54.278 "seek_hole": false, 00:15:54.278 "seek_data": false, 00:15:54.278 "copy": true, 00:15:54.278 "nvme_iov_md": false 00:15:54.278 }, 00:15:54.278 "memory_domains": [ 00:15:54.278 { 00:15:54.278 "dma_device_id": "system", 00:15:54.278 "dma_device_type": 1 00:15:54.278 }, 00:15:54.278 { 00:15:54.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:54.278 "dma_device_type": 2 00:15:54.278 } 00:15:54.278 ], 00:15:54.278 "driver_specific": {} 00:15:54.278 } 00:15:54.278 ] 00:15:54.278 20:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.278 20:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:54.278 20:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:54.278 20:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:54.278 20:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:54.278 20:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:15:54.278 20:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:54.278 20:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:54.278 20:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:54.278 20:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:54.278 20:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.278 20:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.278 20:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.278 20:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.278 20:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:54.278 20:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.278 20:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.278 20:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.278 20:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.278 20:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.278 "name": "Existed_Raid", 00:15:54.278 "uuid": "9986565b-6b4a-4f7c-b44c-3956b4ad318d", 00:15:54.278 "strip_size_kb": 64, 00:15:54.278 "state": "configuring", 00:15:54.278 "raid_level": "raid5f", 00:15:54.278 "superblock": true, 00:15:54.278 "num_base_bdevs": 4, 00:15:54.278 "num_base_bdevs_discovered": 3, 
00:15:54.278 "num_base_bdevs_operational": 4, 00:15:54.278 "base_bdevs_list": [ 00:15:54.278 { 00:15:54.278 "name": "BaseBdev1", 00:15:54.278 "uuid": "b2ad0782-8153-4a34-83f8-87fd9819e63f", 00:15:54.278 "is_configured": true, 00:15:54.278 "data_offset": 2048, 00:15:54.278 "data_size": 63488 00:15:54.278 }, 00:15:54.278 { 00:15:54.278 "name": "BaseBdev2", 00:15:54.278 "uuid": "d187ee3b-ce79-4e74-b83b-84f00e6473a7", 00:15:54.278 "is_configured": true, 00:15:54.278 "data_offset": 2048, 00:15:54.278 "data_size": 63488 00:15:54.278 }, 00:15:54.278 { 00:15:54.278 "name": "BaseBdev3", 00:15:54.278 "uuid": "45a576a3-45b5-4def-81da-118f6d617b60", 00:15:54.278 "is_configured": true, 00:15:54.278 "data_offset": 2048, 00:15:54.278 "data_size": 63488 00:15:54.278 }, 00:15:54.278 { 00:15:54.278 "name": "BaseBdev4", 00:15:54.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.278 "is_configured": false, 00:15:54.278 "data_offset": 0, 00:15:54.278 "data_size": 0 00:15:54.278 } 00:15:54.278 ] 00:15:54.278 }' 00:15:54.278 20:28:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.278 20:28:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.846 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:54.846 20:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.846 20:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.846 [2024-11-26 20:28:48.166217] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:54.846 [2024-11-26 20:28:48.166447] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:15:54.846 [2024-11-26 20:28:48.166464] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:54.846 [2024-11-26 
20:28:48.166754] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:15:54.846 BaseBdev4 00:15:54.846 [2024-11-26 20:28:48.167230] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:15:54.846 [2024-11-26 20:28:48.167253] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:15:54.846 [2024-11-26 20:28:48.167382] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:54.846 20:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.846 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:54.846 20:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:15:54.846 20:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:54.846 20:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:54.846 20:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:54.846 20:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:54.846 20:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:54.846 20:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.846 20:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.846 20:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.846 20:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:54.846 20:28:48 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.846 20:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.846 [ 00:15:54.846 { 00:15:54.846 "name": "BaseBdev4", 00:15:54.846 "aliases": [ 00:15:54.846 "35c88158-2b2c-420a-8d57-269541ff22bd" 00:15:54.846 ], 00:15:54.846 "product_name": "Malloc disk", 00:15:54.846 "block_size": 512, 00:15:54.846 "num_blocks": 65536, 00:15:54.846 "uuid": "35c88158-2b2c-420a-8d57-269541ff22bd", 00:15:54.846 "assigned_rate_limits": { 00:15:54.846 "rw_ios_per_sec": 0, 00:15:54.846 "rw_mbytes_per_sec": 0, 00:15:54.846 "r_mbytes_per_sec": 0, 00:15:54.846 "w_mbytes_per_sec": 0 00:15:54.846 }, 00:15:54.846 "claimed": true, 00:15:54.846 "claim_type": "exclusive_write", 00:15:54.846 "zoned": false, 00:15:54.846 "supported_io_types": { 00:15:54.846 "read": true, 00:15:54.846 "write": true, 00:15:54.846 "unmap": true, 00:15:54.846 "flush": true, 00:15:54.846 "reset": true, 00:15:54.846 "nvme_admin": false, 00:15:54.846 "nvme_io": false, 00:15:54.846 "nvme_io_md": false, 00:15:54.846 "write_zeroes": true, 00:15:54.846 "zcopy": true, 00:15:54.846 "get_zone_info": false, 00:15:54.846 "zone_management": false, 00:15:54.846 "zone_append": false, 00:15:54.846 "compare": false, 00:15:54.847 "compare_and_write": false, 00:15:54.847 "abort": true, 00:15:54.847 "seek_hole": false, 00:15:54.847 "seek_data": false, 00:15:54.847 "copy": true, 00:15:54.847 "nvme_iov_md": false 00:15:54.847 }, 00:15:54.847 "memory_domains": [ 00:15:54.847 { 00:15:54.847 "dma_device_id": "system", 00:15:54.847 "dma_device_type": 1 00:15:54.847 }, 00:15:54.847 { 00:15:54.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:54.847 "dma_device_type": 2 00:15:54.847 } 00:15:54.847 ], 00:15:54.847 "driver_specific": {} 00:15:54.847 } 00:15:54.847 ] 00:15:54.847 20:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.847 20:28:48 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:54.847 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:54.847 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:54.847 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:54.847 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:54.847 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:54.847 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:54.847 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:54.847 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:54.847 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.847 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.847 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.847 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.847 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.847 20:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.847 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:54.847 20:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:54.847 20:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.847 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.847 "name": "Existed_Raid", 00:15:54.847 "uuid": "9986565b-6b4a-4f7c-b44c-3956b4ad318d", 00:15:54.847 "strip_size_kb": 64, 00:15:54.847 "state": "online", 00:15:54.847 "raid_level": "raid5f", 00:15:54.847 "superblock": true, 00:15:54.847 "num_base_bdevs": 4, 00:15:54.847 "num_base_bdevs_discovered": 4, 00:15:54.847 "num_base_bdevs_operational": 4, 00:15:54.847 "base_bdevs_list": [ 00:15:54.847 { 00:15:54.847 "name": "BaseBdev1", 00:15:54.847 "uuid": "b2ad0782-8153-4a34-83f8-87fd9819e63f", 00:15:54.847 "is_configured": true, 00:15:54.847 "data_offset": 2048, 00:15:54.847 "data_size": 63488 00:15:54.847 }, 00:15:54.847 { 00:15:54.847 "name": "BaseBdev2", 00:15:54.847 "uuid": "d187ee3b-ce79-4e74-b83b-84f00e6473a7", 00:15:54.847 "is_configured": true, 00:15:54.847 "data_offset": 2048, 00:15:54.847 "data_size": 63488 00:15:54.847 }, 00:15:54.847 { 00:15:54.847 "name": "BaseBdev3", 00:15:54.847 "uuid": "45a576a3-45b5-4def-81da-118f6d617b60", 00:15:54.847 "is_configured": true, 00:15:54.847 "data_offset": 2048, 00:15:54.847 "data_size": 63488 00:15:54.847 }, 00:15:54.847 { 00:15:54.847 "name": "BaseBdev4", 00:15:54.847 "uuid": "35c88158-2b2c-420a-8d57-269541ff22bd", 00:15:54.847 "is_configured": true, 00:15:54.847 "data_offset": 2048, 00:15:54.847 "data_size": 63488 00:15:54.847 } 00:15:54.847 ] 00:15:54.847 }' 00:15:54.847 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.847 20:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.107 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:55.107 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:15:55.107 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:55.107 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:55.107 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:55.107 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:55.107 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:55.107 20:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.107 20:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.107 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:55.107 [2024-11-26 20:28:48.629805] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:55.107 20:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.367 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:55.367 "name": "Existed_Raid", 00:15:55.367 "aliases": [ 00:15:55.367 "9986565b-6b4a-4f7c-b44c-3956b4ad318d" 00:15:55.367 ], 00:15:55.367 "product_name": "Raid Volume", 00:15:55.367 "block_size": 512, 00:15:55.367 "num_blocks": 190464, 00:15:55.367 "uuid": "9986565b-6b4a-4f7c-b44c-3956b4ad318d", 00:15:55.367 "assigned_rate_limits": { 00:15:55.367 "rw_ios_per_sec": 0, 00:15:55.367 "rw_mbytes_per_sec": 0, 00:15:55.367 "r_mbytes_per_sec": 0, 00:15:55.367 "w_mbytes_per_sec": 0 00:15:55.367 }, 00:15:55.367 "claimed": false, 00:15:55.367 "zoned": false, 00:15:55.367 "supported_io_types": { 00:15:55.367 "read": true, 00:15:55.367 "write": true, 00:15:55.367 "unmap": false, 00:15:55.367 "flush": false, 
00:15:55.367 "reset": true, 00:15:55.367 "nvme_admin": false, 00:15:55.367 "nvme_io": false, 00:15:55.367 "nvme_io_md": false, 00:15:55.367 "write_zeroes": true, 00:15:55.367 "zcopy": false, 00:15:55.367 "get_zone_info": false, 00:15:55.367 "zone_management": false, 00:15:55.367 "zone_append": false, 00:15:55.367 "compare": false, 00:15:55.367 "compare_and_write": false, 00:15:55.367 "abort": false, 00:15:55.367 "seek_hole": false, 00:15:55.367 "seek_data": false, 00:15:55.367 "copy": false, 00:15:55.367 "nvme_iov_md": false 00:15:55.367 }, 00:15:55.367 "driver_specific": { 00:15:55.367 "raid": { 00:15:55.367 "uuid": "9986565b-6b4a-4f7c-b44c-3956b4ad318d", 00:15:55.367 "strip_size_kb": 64, 00:15:55.367 "state": "online", 00:15:55.367 "raid_level": "raid5f", 00:15:55.367 "superblock": true, 00:15:55.367 "num_base_bdevs": 4, 00:15:55.367 "num_base_bdevs_discovered": 4, 00:15:55.367 "num_base_bdevs_operational": 4, 00:15:55.367 "base_bdevs_list": [ 00:15:55.367 { 00:15:55.367 "name": "BaseBdev1", 00:15:55.367 "uuid": "b2ad0782-8153-4a34-83f8-87fd9819e63f", 00:15:55.367 "is_configured": true, 00:15:55.367 "data_offset": 2048, 00:15:55.367 "data_size": 63488 00:15:55.367 }, 00:15:55.367 { 00:15:55.367 "name": "BaseBdev2", 00:15:55.367 "uuid": "d187ee3b-ce79-4e74-b83b-84f00e6473a7", 00:15:55.367 "is_configured": true, 00:15:55.367 "data_offset": 2048, 00:15:55.367 "data_size": 63488 00:15:55.367 }, 00:15:55.367 { 00:15:55.367 "name": "BaseBdev3", 00:15:55.367 "uuid": "45a576a3-45b5-4def-81da-118f6d617b60", 00:15:55.367 "is_configured": true, 00:15:55.367 "data_offset": 2048, 00:15:55.367 "data_size": 63488 00:15:55.367 }, 00:15:55.367 { 00:15:55.367 "name": "BaseBdev4", 00:15:55.367 "uuid": "35c88158-2b2c-420a-8d57-269541ff22bd", 00:15:55.367 "is_configured": true, 00:15:55.367 "data_offset": 2048, 00:15:55.367 "data_size": 63488 00:15:55.367 } 00:15:55.368 ] 00:15:55.368 } 00:15:55.368 } 00:15:55.368 }' 00:15:55.368 20:28:48 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:55.368 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:55.368 BaseBdev2 00:15:55.368 BaseBdev3 00:15:55.368 BaseBdev4' 00:15:55.368 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:55.368 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:55.368 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:55.368 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:55.368 20:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.368 20:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.368 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:55.368 20:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.368 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:55.368 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:55.368 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:55.368 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:55.368 20:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.368 20:28:48 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:55.368 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:55.368 20:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.368 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:55.368 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:55.368 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:55.368 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:55.368 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:55.368 20:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.368 20:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.368 20:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.368 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:55.368 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:55.368 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:55.368 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:55.368 20:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.368 20:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:15:55.368 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:55.368 20:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.627 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:55.627 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:55.627 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:55.627 20:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.627 20:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.627 [2024-11-26 20:28:48.937142] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:55.627 20:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.627 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:55.627 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:55.627 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:55.627 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:55.627 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:55.627 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:55.627 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:55.627 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=online 00:15:55.627 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:55.627 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.627 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:55.627 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.627 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.627 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.627 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.627 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.627 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.627 20:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.627 20:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.627 20:28:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.627 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.627 "name": "Existed_Raid", 00:15:55.627 "uuid": "9986565b-6b4a-4f7c-b44c-3956b4ad318d", 00:15:55.627 "strip_size_kb": 64, 00:15:55.627 "state": "online", 00:15:55.627 "raid_level": "raid5f", 00:15:55.627 "superblock": true, 00:15:55.627 "num_base_bdevs": 4, 00:15:55.627 "num_base_bdevs_discovered": 3, 00:15:55.627 "num_base_bdevs_operational": 3, 00:15:55.627 "base_bdevs_list": [ 00:15:55.627 { 00:15:55.627 "name": null, 00:15:55.627 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:55.627 "is_configured": false, 00:15:55.627 "data_offset": 0, 00:15:55.627 "data_size": 63488 00:15:55.627 }, 00:15:55.627 { 00:15:55.627 "name": "BaseBdev2", 00:15:55.627 "uuid": "d187ee3b-ce79-4e74-b83b-84f00e6473a7", 00:15:55.627 "is_configured": true, 00:15:55.627 "data_offset": 2048, 00:15:55.627 "data_size": 63488 00:15:55.627 }, 00:15:55.627 { 00:15:55.627 "name": "BaseBdev3", 00:15:55.627 "uuid": "45a576a3-45b5-4def-81da-118f6d617b60", 00:15:55.627 "is_configured": true, 00:15:55.627 "data_offset": 2048, 00:15:55.627 "data_size": 63488 00:15:55.627 }, 00:15:55.627 { 00:15:55.627 "name": "BaseBdev4", 00:15:55.627 "uuid": "35c88158-2b2c-420a-8d57-269541ff22bd", 00:15:55.627 "is_configured": true, 00:15:55.627 "data_offset": 2048, 00:15:55.627 "data_size": 63488 00:15:55.627 } 00:15:55.627 ] 00:15:55.627 }' 00:15:55.627 20:28:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.627 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.886 20:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:55.886 20:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:55.886 20:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:55.886 20:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.886 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.886 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.886 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.145 20:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:15:56.145 20:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:56.145 20:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:56.145 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.145 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.145 [2024-11-26 20:28:49.447461] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:56.145 [2024-11-26 20:28:49.447632] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:56.145 [2024-11-26 20:28:49.467790] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:56.145 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.145 20:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:56.145 20:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:56.145 20:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.145 20:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:56.145 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.145 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.145 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.145 20:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:56.145 20:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:56.145 
20:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:56.145 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.145 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.145 [2024-11-26 20:28:49.523759] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:56.145 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.145 20:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:56.145 20:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:56.145 20:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.145 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.145 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.145 20:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:56.145 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.145 20:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:56.145 20:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:56.145 20:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:56.145 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.145 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.145 [2024-11-26 20:28:49.604795] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:56.145 [2024-11-26 20:28:49.604850] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:15:56.145 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.145 20:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:56.145 20:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:56.146 20:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:56.146 20:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.146 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.146 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.146 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.146 20:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:56.146 20:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:56.146 20:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:56.146 20:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:56.146 20:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:56.146 20:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:56.146 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.146 20:28:49 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:56.146 BaseBdev2 00:15:56.146 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.146 20:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:56.146 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:56.146 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:56.146 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:56.146 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:56.146 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:56.146 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:56.146 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.146 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.146 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.146 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:56.146 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.146 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.406 [ 00:15:56.406 { 00:15:56.406 "name": "BaseBdev2", 00:15:56.406 "aliases": [ 00:15:56.406 "62c9bed9-72cb-4f9d-84a0-0ef24b3a923e" 00:15:56.406 ], 00:15:56.406 "product_name": "Malloc disk", 00:15:56.406 "block_size": 512, 00:15:56.406 "num_blocks": 65536, 00:15:56.406 "uuid": 
"62c9bed9-72cb-4f9d-84a0-0ef24b3a923e", 00:15:56.406 "assigned_rate_limits": { 00:15:56.406 "rw_ios_per_sec": 0, 00:15:56.406 "rw_mbytes_per_sec": 0, 00:15:56.406 "r_mbytes_per_sec": 0, 00:15:56.406 "w_mbytes_per_sec": 0 00:15:56.406 }, 00:15:56.406 "claimed": false, 00:15:56.406 "zoned": false, 00:15:56.406 "supported_io_types": { 00:15:56.406 "read": true, 00:15:56.406 "write": true, 00:15:56.406 "unmap": true, 00:15:56.406 "flush": true, 00:15:56.406 "reset": true, 00:15:56.406 "nvme_admin": false, 00:15:56.406 "nvme_io": false, 00:15:56.406 "nvme_io_md": false, 00:15:56.406 "write_zeroes": true, 00:15:56.406 "zcopy": true, 00:15:56.406 "get_zone_info": false, 00:15:56.406 "zone_management": false, 00:15:56.406 "zone_append": false, 00:15:56.406 "compare": false, 00:15:56.406 "compare_and_write": false, 00:15:56.406 "abort": true, 00:15:56.406 "seek_hole": false, 00:15:56.406 "seek_data": false, 00:15:56.406 "copy": true, 00:15:56.406 "nvme_iov_md": false 00:15:56.406 }, 00:15:56.406 "memory_domains": [ 00:15:56.406 { 00:15:56.406 "dma_device_id": "system", 00:15:56.406 "dma_device_type": 1 00:15:56.406 }, 00:15:56.406 { 00:15:56.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.406 "dma_device_type": 2 00:15:56.406 } 00:15:56.406 ], 00:15:56.406 "driver_specific": {} 00:15:56.406 } 00:15:56.406 ] 00:15:56.406 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.406 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:56.406 20:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:56.406 20:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:56.406 20:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:56.406 20:28:49 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.406 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.406 BaseBdev3 00:15:56.406 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.406 20:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:56.406 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:56.406 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:56.406 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:56.406 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:56.406 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:56.406 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:56.406 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.406 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.406 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.406 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:56.406 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.406 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.406 [ 00:15:56.406 { 00:15:56.406 "name": "BaseBdev3", 00:15:56.406 "aliases": [ 00:15:56.406 "112ed016-c73b-446e-bf97-7e54b324b8b5" 00:15:56.406 ], 00:15:56.406 
"product_name": "Malloc disk", 00:15:56.406 "block_size": 512, 00:15:56.406 "num_blocks": 65536, 00:15:56.406 "uuid": "112ed016-c73b-446e-bf97-7e54b324b8b5", 00:15:56.406 "assigned_rate_limits": { 00:15:56.406 "rw_ios_per_sec": 0, 00:15:56.406 "rw_mbytes_per_sec": 0, 00:15:56.406 "r_mbytes_per_sec": 0, 00:15:56.406 "w_mbytes_per_sec": 0 00:15:56.406 }, 00:15:56.406 "claimed": false, 00:15:56.406 "zoned": false, 00:15:56.406 "supported_io_types": { 00:15:56.406 "read": true, 00:15:56.406 "write": true, 00:15:56.406 "unmap": true, 00:15:56.406 "flush": true, 00:15:56.406 "reset": true, 00:15:56.406 "nvme_admin": false, 00:15:56.406 "nvme_io": false, 00:15:56.406 "nvme_io_md": false, 00:15:56.406 "write_zeroes": true, 00:15:56.406 "zcopy": true, 00:15:56.406 "get_zone_info": false, 00:15:56.406 "zone_management": false, 00:15:56.406 "zone_append": false, 00:15:56.406 "compare": false, 00:15:56.406 "compare_and_write": false, 00:15:56.406 "abort": true, 00:15:56.406 "seek_hole": false, 00:15:56.406 "seek_data": false, 00:15:56.406 "copy": true, 00:15:56.406 "nvme_iov_md": false 00:15:56.406 }, 00:15:56.406 "memory_domains": [ 00:15:56.406 { 00:15:56.406 "dma_device_id": "system", 00:15:56.406 "dma_device_type": 1 00:15:56.406 }, 00:15:56.406 { 00:15:56.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.406 "dma_device_type": 2 00:15:56.406 } 00:15:56.406 ], 00:15:56.406 "driver_specific": {} 00:15:56.406 } 00:15:56.406 ] 00:15:56.406 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.406 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:56.406 20:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:56.406 20:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:56.406 20:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:15:56.406 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.406 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.406 BaseBdev4 00:15:56.406 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.406 20:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:56.406 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:15:56.406 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:56.406 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:56.406 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:56.406 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:56.406 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:56.406 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.406 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.406 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.406 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:56.406 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.406 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.406 [ 00:15:56.406 { 00:15:56.406 "name": "BaseBdev4", 00:15:56.406 
"aliases": [ 00:15:56.406 "a30a2e31-d5e8-4287-addd-94ab9efa4d02" 00:15:56.406 ], 00:15:56.406 "product_name": "Malloc disk", 00:15:56.406 "block_size": 512, 00:15:56.406 "num_blocks": 65536, 00:15:56.406 "uuid": "a30a2e31-d5e8-4287-addd-94ab9efa4d02", 00:15:56.406 "assigned_rate_limits": { 00:15:56.406 "rw_ios_per_sec": 0, 00:15:56.406 "rw_mbytes_per_sec": 0, 00:15:56.406 "r_mbytes_per_sec": 0, 00:15:56.407 "w_mbytes_per_sec": 0 00:15:56.407 }, 00:15:56.407 "claimed": false, 00:15:56.407 "zoned": false, 00:15:56.407 "supported_io_types": { 00:15:56.407 "read": true, 00:15:56.407 "write": true, 00:15:56.407 "unmap": true, 00:15:56.407 "flush": true, 00:15:56.407 "reset": true, 00:15:56.407 "nvme_admin": false, 00:15:56.407 "nvme_io": false, 00:15:56.407 "nvme_io_md": false, 00:15:56.407 "write_zeroes": true, 00:15:56.407 "zcopy": true, 00:15:56.407 "get_zone_info": false, 00:15:56.407 "zone_management": false, 00:15:56.407 "zone_append": false, 00:15:56.407 "compare": false, 00:15:56.407 "compare_and_write": false, 00:15:56.407 "abort": true, 00:15:56.407 "seek_hole": false, 00:15:56.407 "seek_data": false, 00:15:56.407 "copy": true, 00:15:56.407 "nvme_iov_md": false 00:15:56.407 }, 00:15:56.407 "memory_domains": [ 00:15:56.407 { 00:15:56.407 "dma_device_id": "system", 00:15:56.407 "dma_device_type": 1 00:15:56.407 }, 00:15:56.407 { 00:15:56.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.407 "dma_device_type": 2 00:15:56.407 } 00:15:56.407 ], 00:15:56.407 "driver_specific": {} 00:15:56.407 } 00:15:56.407 ] 00:15:56.407 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.407 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:56.407 20:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:56.407 20:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:56.407 
20:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:56.407 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.407 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.407 [2024-11-26 20:28:49.835583] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:56.407 [2024-11-26 20:28:49.835635] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:56.407 [2024-11-26 20:28:49.835657] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:56.407 [2024-11-26 20:28:49.837530] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:56.407 [2024-11-26 20:28:49.837585] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:56.407 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.407 20:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:56.407 20:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:56.407 20:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:56.407 20:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:56.407 20:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:56.407 20:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:56.407 20:28:49 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.407 20:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.407 20:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.407 20:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.407 20:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:56.407 20:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.407 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.407 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.407 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.407 20:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.407 "name": "Existed_Raid", 00:15:56.407 "uuid": "11186947-6632-4059-b698-2e44841f58a5", 00:15:56.407 "strip_size_kb": 64, 00:15:56.407 "state": "configuring", 00:15:56.407 "raid_level": "raid5f", 00:15:56.407 "superblock": true, 00:15:56.407 "num_base_bdevs": 4, 00:15:56.407 "num_base_bdevs_discovered": 3, 00:15:56.407 "num_base_bdevs_operational": 4, 00:15:56.407 "base_bdevs_list": [ 00:15:56.407 { 00:15:56.407 "name": "BaseBdev1", 00:15:56.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.407 "is_configured": false, 00:15:56.407 "data_offset": 0, 00:15:56.407 "data_size": 0 00:15:56.407 }, 00:15:56.407 { 00:15:56.407 "name": "BaseBdev2", 00:15:56.407 "uuid": "62c9bed9-72cb-4f9d-84a0-0ef24b3a923e", 00:15:56.407 "is_configured": true, 00:15:56.407 "data_offset": 2048, 00:15:56.407 "data_size": 63488 00:15:56.407 }, 00:15:56.407 { 00:15:56.407 "name": "BaseBdev3", 
00:15:56.407 "uuid": "112ed016-c73b-446e-bf97-7e54b324b8b5", 00:15:56.407 "is_configured": true, 00:15:56.407 "data_offset": 2048, 00:15:56.407 "data_size": 63488 00:15:56.407 }, 00:15:56.407 { 00:15:56.407 "name": "BaseBdev4", 00:15:56.407 "uuid": "a30a2e31-d5e8-4287-addd-94ab9efa4d02", 00:15:56.407 "is_configured": true, 00:15:56.407 "data_offset": 2048, 00:15:56.407 "data_size": 63488 00:15:56.407 } 00:15:56.407 ] 00:15:56.407 }' 00:15:56.407 20:28:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.407 20:28:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.700 20:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:56.700 20:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.700 20:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.979 [2024-11-26 20:28:50.238900] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:56.979 20:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.979 20:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:56.979 20:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:56.979 20:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:56.979 20:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:56.979 20:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:56.979 20:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:56.979 
20:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.979 20:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.979 20:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.979 20:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.979 20:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.979 20:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.979 20:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.979 20:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:56.979 20:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.979 20:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.979 "name": "Existed_Raid", 00:15:56.979 "uuid": "11186947-6632-4059-b698-2e44841f58a5", 00:15:56.979 "strip_size_kb": 64, 00:15:56.979 "state": "configuring", 00:15:56.979 "raid_level": "raid5f", 00:15:56.979 "superblock": true, 00:15:56.979 "num_base_bdevs": 4, 00:15:56.979 "num_base_bdevs_discovered": 2, 00:15:56.979 "num_base_bdevs_operational": 4, 00:15:56.979 "base_bdevs_list": [ 00:15:56.979 { 00:15:56.979 "name": "BaseBdev1", 00:15:56.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.979 "is_configured": false, 00:15:56.979 "data_offset": 0, 00:15:56.979 "data_size": 0 00:15:56.979 }, 00:15:56.979 { 00:15:56.979 "name": null, 00:15:56.979 "uuid": "62c9bed9-72cb-4f9d-84a0-0ef24b3a923e", 00:15:56.979 "is_configured": false, 00:15:56.979 "data_offset": 0, 00:15:56.979 "data_size": 63488 00:15:56.979 }, 00:15:56.979 { 
00:15:56.979 "name": "BaseBdev3", 00:15:56.979 "uuid": "112ed016-c73b-446e-bf97-7e54b324b8b5", 00:15:56.979 "is_configured": true, 00:15:56.979 "data_offset": 2048, 00:15:56.979 "data_size": 63488 00:15:56.979 }, 00:15:56.979 { 00:15:56.979 "name": "BaseBdev4", 00:15:56.979 "uuid": "a30a2e31-d5e8-4287-addd-94ab9efa4d02", 00:15:56.979 "is_configured": true, 00:15:56.979 "data_offset": 2048, 00:15:56.979 "data_size": 63488 00:15:56.979 } 00:15:56.979 ] 00:15:56.979 }' 00:15:56.979 20:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.979 20:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.239 20:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.239 20:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:57.239 20:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.239 20:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.239 20:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.239 20:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:57.239 20:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:57.239 20:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.239 20:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.239 [2024-11-26 20:28:50.683554] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:57.239 BaseBdev1 00:15:57.239 20:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:15:57.239 20:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:57.239 20:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:57.239 20:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:57.239 20:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:57.239 20:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:57.239 20:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:57.239 20:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:57.239 20:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.239 20:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.239 20:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.239 20:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:57.239 20:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.239 20:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.239 [ 00:15:57.239 { 00:15:57.239 "name": "BaseBdev1", 00:15:57.239 "aliases": [ 00:15:57.239 "cdcf9067-a6ce-445c-8d51-ab15c28dda87" 00:15:57.239 ], 00:15:57.239 "product_name": "Malloc disk", 00:15:57.239 "block_size": 512, 00:15:57.239 "num_blocks": 65536, 00:15:57.239 "uuid": "cdcf9067-a6ce-445c-8d51-ab15c28dda87", 00:15:57.239 "assigned_rate_limits": { 00:15:57.239 "rw_ios_per_sec": 0, 00:15:57.239 "rw_mbytes_per_sec": 0, 00:15:57.239 
"r_mbytes_per_sec": 0, 00:15:57.239 "w_mbytes_per_sec": 0 00:15:57.239 }, 00:15:57.239 "claimed": true, 00:15:57.239 "claim_type": "exclusive_write", 00:15:57.239 "zoned": false, 00:15:57.239 "supported_io_types": { 00:15:57.239 "read": true, 00:15:57.239 "write": true, 00:15:57.239 "unmap": true, 00:15:57.239 "flush": true, 00:15:57.239 "reset": true, 00:15:57.239 "nvme_admin": false, 00:15:57.239 "nvme_io": false, 00:15:57.239 "nvme_io_md": false, 00:15:57.239 "write_zeroes": true, 00:15:57.239 "zcopy": true, 00:15:57.239 "get_zone_info": false, 00:15:57.239 "zone_management": false, 00:15:57.239 "zone_append": false, 00:15:57.239 "compare": false, 00:15:57.239 "compare_and_write": false, 00:15:57.239 "abort": true, 00:15:57.239 "seek_hole": false, 00:15:57.239 "seek_data": false, 00:15:57.239 "copy": true, 00:15:57.239 "nvme_iov_md": false 00:15:57.239 }, 00:15:57.239 "memory_domains": [ 00:15:57.239 { 00:15:57.239 "dma_device_id": "system", 00:15:57.239 "dma_device_type": 1 00:15:57.239 }, 00:15:57.239 { 00:15:57.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.239 "dma_device_type": 2 00:15:57.239 } 00:15:57.239 ], 00:15:57.239 "driver_specific": {} 00:15:57.239 } 00:15:57.239 ] 00:15:57.239 20:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.239 20:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:57.239 20:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:57.239 20:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:57.239 20:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:57.239 20:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:57.239 20:28:50 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.239 20:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:57.239 20:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.239 20:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.239 20:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.239 20:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.239 20:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:57.239 20:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.239 20:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.239 20:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.239 20:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.239 20:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.239 "name": "Existed_Raid", 00:15:57.239 "uuid": "11186947-6632-4059-b698-2e44841f58a5", 00:15:57.239 "strip_size_kb": 64, 00:15:57.239 "state": "configuring", 00:15:57.239 "raid_level": "raid5f", 00:15:57.239 "superblock": true, 00:15:57.239 "num_base_bdevs": 4, 00:15:57.240 "num_base_bdevs_discovered": 3, 00:15:57.240 "num_base_bdevs_operational": 4, 00:15:57.240 "base_bdevs_list": [ 00:15:57.240 { 00:15:57.240 "name": "BaseBdev1", 00:15:57.240 "uuid": "cdcf9067-a6ce-445c-8d51-ab15c28dda87", 00:15:57.240 "is_configured": true, 00:15:57.240 "data_offset": 2048, 00:15:57.240 "data_size": 63488 00:15:57.240 
}, 00:15:57.240 { 00:15:57.240 "name": null, 00:15:57.240 "uuid": "62c9bed9-72cb-4f9d-84a0-0ef24b3a923e", 00:15:57.240 "is_configured": false, 00:15:57.240 "data_offset": 0, 00:15:57.240 "data_size": 63488 00:15:57.240 }, 00:15:57.240 { 00:15:57.240 "name": "BaseBdev3", 00:15:57.240 "uuid": "112ed016-c73b-446e-bf97-7e54b324b8b5", 00:15:57.240 "is_configured": true, 00:15:57.240 "data_offset": 2048, 00:15:57.240 "data_size": 63488 00:15:57.240 }, 00:15:57.240 { 00:15:57.240 "name": "BaseBdev4", 00:15:57.240 "uuid": "a30a2e31-d5e8-4287-addd-94ab9efa4d02", 00:15:57.240 "is_configured": true, 00:15:57.240 "data_offset": 2048, 00:15:57.240 "data_size": 63488 00:15:57.240 } 00:15:57.240 ] 00:15:57.240 }' 00:15:57.240 20:28:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.240 20:28:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.808 20:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.808 20:28:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.808 20:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:57.809 20:28:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.809 20:28:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.809 20:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:57.809 20:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:57.809 20:28:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.809 20:28:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.809 
[2024-11-26 20:28:51.154833] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:57.809 20:28:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.809 20:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:57.809 20:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:57.809 20:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:57.809 20:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:57.809 20:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.809 20:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:57.809 20:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.809 20:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.809 20:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.809 20:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.809 20:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:57.809 20:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.809 20:28:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.809 20:28:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.809 20:28:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:15:57.809 20:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.809 "name": "Existed_Raid", 00:15:57.809 "uuid": "11186947-6632-4059-b698-2e44841f58a5", 00:15:57.809 "strip_size_kb": 64, 00:15:57.809 "state": "configuring", 00:15:57.809 "raid_level": "raid5f", 00:15:57.809 "superblock": true, 00:15:57.809 "num_base_bdevs": 4, 00:15:57.809 "num_base_bdevs_discovered": 2, 00:15:57.809 "num_base_bdevs_operational": 4, 00:15:57.809 "base_bdevs_list": [ 00:15:57.809 { 00:15:57.809 "name": "BaseBdev1", 00:15:57.809 "uuid": "cdcf9067-a6ce-445c-8d51-ab15c28dda87", 00:15:57.809 "is_configured": true, 00:15:57.809 "data_offset": 2048, 00:15:57.809 "data_size": 63488 00:15:57.809 }, 00:15:57.809 { 00:15:57.809 "name": null, 00:15:57.809 "uuid": "62c9bed9-72cb-4f9d-84a0-0ef24b3a923e", 00:15:57.809 "is_configured": false, 00:15:57.809 "data_offset": 0, 00:15:57.809 "data_size": 63488 00:15:57.809 }, 00:15:57.809 { 00:15:57.809 "name": null, 00:15:57.809 "uuid": "112ed016-c73b-446e-bf97-7e54b324b8b5", 00:15:57.809 "is_configured": false, 00:15:57.809 "data_offset": 0, 00:15:57.809 "data_size": 63488 00:15:57.809 }, 00:15:57.809 { 00:15:57.809 "name": "BaseBdev4", 00:15:57.809 "uuid": "a30a2e31-d5e8-4287-addd-94ab9efa4d02", 00:15:57.809 "is_configured": true, 00:15:57.809 "data_offset": 2048, 00:15:57.809 "data_size": 63488 00:15:57.809 } 00:15:57.809 ] 00:15:57.809 }' 00:15:57.809 20:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.809 20:28:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.378 20:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:58.378 20:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.378 20:28:51 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.379 20:28:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.379 20:28:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.379 20:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:58.379 20:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:58.379 20:28:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.379 20:28:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.379 [2024-11-26 20:28:51.805820] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:58.379 20:28:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.379 20:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:58.379 20:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:58.379 20:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:58.379 20:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:58.379 20:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:58.379 20:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:58.379 20:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.379 20:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.379 20:28:51 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.379 20:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.379 20:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.379 20:28:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.379 20:28:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.379 20:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:58.379 20:28:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.379 20:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.379 "name": "Existed_Raid", 00:15:58.379 "uuid": "11186947-6632-4059-b698-2e44841f58a5", 00:15:58.379 "strip_size_kb": 64, 00:15:58.379 "state": "configuring", 00:15:58.379 "raid_level": "raid5f", 00:15:58.379 "superblock": true, 00:15:58.379 "num_base_bdevs": 4, 00:15:58.379 "num_base_bdevs_discovered": 3, 00:15:58.379 "num_base_bdevs_operational": 4, 00:15:58.379 "base_bdevs_list": [ 00:15:58.379 { 00:15:58.379 "name": "BaseBdev1", 00:15:58.379 "uuid": "cdcf9067-a6ce-445c-8d51-ab15c28dda87", 00:15:58.379 "is_configured": true, 00:15:58.379 "data_offset": 2048, 00:15:58.379 "data_size": 63488 00:15:58.379 }, 00:15:58.379 { 00:15:58.379 "name": null, 00:15:58.379 "uuid": "62c9bed9-72cb-4f9d-84a0-0ef24b3a923e", 00:15:58.379 "is_configured": false, 00:15:58.379 "data_offset": 0, 00:15:58.379 "data_size": 63488 00:15:58.379 }, 00:15:58.379 { 00:15:58.379 "name": "BaseBdev3", 00:15:58.379 "uuid": "112ed016-c73b-446e-bf97-7e54b324b8b5", 00:15:58.379 "is_configured": true, 00:15:58.379 "data_offset": 2048, 00:15:58.379 "data_size": 63488 00:15:58.379 }, 00:15:58.379 { 
00:15:58.379 "name": "BaseBdev4", 00:15:58.379 "uuid": "a30a2e31-d5e8-4287-addd-94ab9efa4d02", 00:15:58.379 "is_configured": true, 00:15:58.379 "data_offset": 2048, 00:15:58.379 "data_size": 63488 00:15:58.379 } 00:15:58.379 ] 00:15:58.379 }' 00:15:58.379 20:28:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.379 20:28:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.948 20:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.948 20:28:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.948 20:28:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.948 20:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:58.948 20:28:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.948 20:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:58.948 20:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:58.948 20:28:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.948 20:28:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.948 [2024-11-26 20:28:52.324964] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:58.948 20:28:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.948 20:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:58.948 20:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:15:58.948 20:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:58.948 20:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:58.948 20:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:58.948 20:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:58.948 20:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.948 20:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.948 20:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.948 20:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.948 20:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.948 20:28:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.948 20:28:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.948 20:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:58.948 20:28:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.948 20:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.948 "name": "Existed_Raid", 00:15:58.948 "uuid": "11186947-6632-4059-b698-2e44841f58a5", 00:15:58.948 "strip_size_kb": 64, 00:15:58.948 "state": "configuring", 00:15:58.948 "raid_level": "raid5f", 00:15:58.948 "superblock": true, 00:15:58.948 "num_base_bdevs": 4, 00:15:58.949 "num_base_bdevs_discovered": 2, 00:15:58.949 
"num_base_bdevs_operational": 4, 00:15:58.949 "base_bdevs_list": [ 00:15:58.949 { 00:15:58.949 "name": null, 00:15:58.949 "uuid": "cdcf9067-a6ce-445c-8d51-ab15c28dda87", 00:15:58.949 "is_configured": false, 00:15:58.949 "data_offset": 0, 00:15:58.949 "data_size": 63488 00:15:58.949 }, 00:15:58.949 { 00:15:58.949 "name": null, 00:15:58.949 "uuid": "62c9bed9-72cb-4f9d-84a0-0ef24b3a923e", 00:15:58.949 "is_configured": false, 00:15:58.949 "data_offset": 0, 00:15:58.949 "data_size": 63488 00:15:58.949 }, 00:15:58.949 { 00:15:58.949 "name": "BaseBdev3", 00:15:58.949 "uuid": "112ed016-c73b-446e-bf97-7e54b324b8b5", 00:15:58.949 "is_configured": true, 00:15:58.949 "data_offset": 2048, 00:15:58.949 "data_size": 63488 00:15:58.949 }, 00:15:58.949 { 00:15:58.949 "name": "BaseBdev4", 00:15:58.949 "uuid": "a30a2e31-d5e8-4287-addd-94ab9efa4d02", 00:15:58.949 "is_configured": true, 00:15:58.949 "data_offset": 2048, 00:15:58.949 "data_size": 63488 00:15:58.949 } 00:15:58.949 ] 00:15:58.949 }' 00:15:58.949 20:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.949 20:28:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.518 20:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:59.518 20:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.518 20:28:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.518 20:28:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.518 20:28:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.518 20:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:59.518 20:28:52 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:59.518 20:28:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.518 20:28:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.518 [2024-11-26 20:28:52.830853] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:59.518 20:28:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.518 20:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:59.518 20:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:59.518 20:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:59.518 20:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:59.518 20:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:59.518 20:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:59.518 20:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.518 20:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.518 20:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.518 20:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.518 20:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.518 20:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:59.518 20:28:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.518 20:28:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.518 20:28:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.518 20:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.518 "name": "Existed_Raid", 00:15:59.518 "uuid": "11186947-6632-4059-b698-2e44841f58a5", 00:15:59.518 "strip_size_kb": 64, 00:15:59.518 "state": "configuring", 00:15:59.518 "raid_level": "raid5f", 00:15:59.518 "superblock": true, 00:15:59.518 "num_base_bdevs": 4, 00:15:59.518 "num_base_bdevs_discovered": 3, 00:15:59.518 "num_base_bdevs_operational": 4, 00:15:59.518 "base_bdevs_list": [ 00:15:59.518 { 00:15:59.518 "name": null, 00:15:59.518 "uuid": "cdcf9067-a6ce-445c-8d51-ab15c28dda87", 00:15:59.518 "is_configured": false, 00:15:59.518 "data_offset": 0, 00:15:59.518 "data_size": 63488 00:15:59.518 }, 00:15:59.518 { 00:15:59.518 "name": "BaseBdev2", 00:15:59.518 "uuid": "62c9bed9-72cb-4f9d-84a0-0ef24b3a923e", 00:15:59.518 "is_configured": true, 00:15:59.518 "data_offset": 2048, 00:15:59.518 "data_size": 63488 00:15:59.518 }, 00:15:59.518 { 00:15:59.518 "name": "BaseBdev3", 00:15:59.518 "uuid": "112ed016-c73b-446e-bf97-7e54b324b8b5", 00:15:59.518 "is_configured": true, 00:15:59.518 "data_offset": 2048, 00:15:59.518 "data_size": 63488 00:15:59.518 }, 00:15:59.518 { 00:15:59.518 "name": "BaseBdev4", 00:15:59.518 "uuid": "a30a2e31-d5e8-4287-addd-94ab9efa4d02", 00:15:59.518 "is_configured": true, 00:15:59.518 "data_offset": 2048, 00:15:59.518 "data_size": 63488 00:15:59.518 } 00:15:59.518 ] 00:15:59.518 }' 00:15:59.518 20:28:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.518 20:28:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:15:59.777 20:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.777 20:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.777 20:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.777 20:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:59.777 20:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.777 20:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:59.777 20:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.777 20:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.777 20:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.777 20:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:59.777 20:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.777 20:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u cdcf9067-a6ce-445c-8d51-ab15c28dda87 00:15:59.777 20:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.777 20:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.038 [2024-11-26 20:28:53.335812] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:00.038 [2024-11-26 20:28:53.336013] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:16:00.038 [2024-11-26 
20:28:53.336030] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:00.038 [2024-11-26 20:28:53.336285] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:00.038 NewBaseBdev 00:16:00.038 [2024-11-26 20:28:53.336773] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:16:00.038 [2024-11-26 20:28:53.336795] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:16:00.038 [2024-11-26 20:28:53.336899] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:00.038 20:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.038 20:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:00.038 20:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:16:00.038 20:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:00.038 20:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:00.038 20:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:00.038 20:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:00.038 20:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:00.038 20:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.038 20:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.038 20:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.038 20:28:53 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:00.038 20:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.038 20:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.038 [ 00:16:00.038 { 00:16:00.038 "name": "NewBaseBdev", 00:16:00.038 "aliases": [ 00:16:00.038 "cdcf9067-a6ce-445c-8d51-ab15c28dda87" 00:16:00.038 ], 00:16:00.038 "product_name": "Malloc disk", 00:16:00.038 "block_size": 512, 00:16:00.038 "num_blocks": 65536, 00:16:00.038 "uuid": "cdcf9067-a6ce-445c-8d51-ab15c28dda87", 00:16:00.038 "assigned_rate_limits": { 00:16:00.038 "rw_ios_per_sec": 0, 00:16:00.038 "rw_mbytes_per_sec": 0, 00:16:00.038 "r_mbytes_per_sec": 0, 00:16:00.038 "w_mbytes_per_sec": 0 00:16:00.038 }, 00:16:00.038 "claimed": true, 00:16:00.038 "claim_type": "exclusive_write", 00:16:00.038 "zoned": false, 00:16:00.038 "supported_io_types": { 00:16:00.038 "read": true, 00:16:00.038 "write": true, 00:16:00.038 "unmap": true, 00:16:00.038 "flush": true, 00:16:00.038 "reset": true, 00:16:00.038 "nvme_admin": false, 00:16:00.038 "nvme_io": false, 00:16:00.038 "nvme_io_md": false, 00:16:00.038 "write_zeroes": true, 00:16:00.038 "zcopy": true, 00:16:00.038 "get_zone_info": false, 00:16:00.038 "zone_management": false, 00:16:00.038 "zone_append": false, 00:16:00.038 "compare": false, 00:16:00.038 "compare_and_write": false, 00:16:00.038 "abort": true, 00:16:00.038 "seek_hole": false, 00:16:00.038 "seek_data": false, 00:16:00.038 "copy": true, 00:16:00.038 "nvme_iov_md": false 00:16:00.038 }, 00:16:00.038 "memory_domains": [ 00:16:00.038 { 00:16:00.038 "dma_device_id": "system", 00:16:00.038 "dma_device_type": 1 00:16:00.038 }, 00:16:00.038 { 00:16:00.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:00.038 "dma_device_type": 2 00:16:00.038 } 00:16:00.038 ], 00:16:00.038 "driver_specific": {} 00:16:00.038 } 00:16:00.038 ] 00:16:00.038 20:28:53 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.038 20:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:00.038 20:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:00.039 20:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:00.039 20:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:00.039 20:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:00.039 20:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:00.039 20:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:00.039 20:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.039 20:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.039 20:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.039 20:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.039 20:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.039 20:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:00.039 20:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.039 20:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.039 20:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:00.039 20:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.039 "name": "Existed_Raid", 00:16:00.039 "uuid": "11186947-6632-4059-b698-2e44841f58a5", 00:16:00.039 "strip_size_kb": 64, 00:16:00.039 "state": "online", 00:16:00.039 "raid_level": "raid5f", 00:16:00.039 "superblock": true, 00:16:00.039 "num_base_bdevs": 4, 00:16:00.039 "num_base_bdevs_discovered": 4, 00:16:00.039 "num_base_bdevs_operational": 4, 00:16:00.039 "base_bdevs_list": [ 00:16:00.039 { 00:16:00.039 "name": "NewBaseBdev", 00:16:00.039 "uuid": "cdcf9067-a6ce-445c-8d51-ab15c28dda87", 00:16:00.039 "is_configured": true, 00:16:00.039 "data_offset": 2048, 00:16:00.039 "data_size": 63488 00:16:00.039 }, 00:16:00.039 { 00:16:00.039 "name": "BaseBdev2", 00:16:00.039 "uuid": "62c9bed9-72cb-4f9d-84a0-0ef24b3a923e", 00:16:00.039 "is_configured": true, 00:16:00.039 "data_offset": 2048, 00:16:00.039 "data_size": 63488 00:16:00.039 }, 00:16:00.039 { 00:16:00.039 "name": "BaseBdev3", 00:16:00.039 "uuid": "112ed016-c73b-446e-bf97-7e54b324b8b5", 00:16:00.039 "is_configured": true, 00:16:00.039 "data_offset": 2048, 00:16:00.039 "data_size": 63488 00:16:00.039 }, 00:16:00.039 { 00:16:00.039 "name": "BaseBdev4", 00:16:00.039 "uuid": "a30a2e31-d5e8-4287-addd-94ab9efa4d02", 00:16:00.039 "is_configured": true, 00:16:00.039 "data_offset": 2048, 00:16:00.039 "data_size": 63488 00:16:00.039 } 00:16:00.039 ] 00:16:00.039 }' 00:16:00.039 20:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.039 20:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.298 20:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:00.298 20:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:00.298 20:28:53 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:00.298 20:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:00.298 20:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:00.298 20:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:00.298 20:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:00.298 20:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.298 20:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.298 20:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:00.298 [2024-11-26 20:28:53.827232] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:00.298 20:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.557 20:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:00.557 "name": "Existed_Raid", 00:16:00.557 "aliases": [ 00:16:00.557 "11186947-6632-4059-b698-2e44841f58a5" 00:16:00.557 ], 00:16:00.557 "product_name": "Raid Volume", 00:16:00.557 "block_size": 512, 00:16:00.557 "num_blocks": 190464, 00:16:00.557 "uuid": "11186947-6632-4059-b698-2e44841f58a5", 00:16:00.557 "assigned_rate_limits": { 00:16:00.557 "rw_ios_per_sec": 0, 00:16:00.557 "rw_mbytes_per_sec": 0, 00:16:00.557 "r_mbytes_per_sec": 0, 00:16:00.557 "w_mbytes_per_sec": 0 00:16:00.557 }, 00:16:00.558 "claimed": false, 00:16:00.558 "zoned": false, 00:16:00.558 "supported_io_types": { 00:16:00.558 "read": true, 00:16:00.558 "write": true, 00:16:00.558 "unmap": false, 00:16:00.558 "flush": false, 00:16:00.558 "reset": true, 00:16:00.558 "nvme_admin": false, 00:16:00.558 "nvme_io": false, 
00:16:00.558 "nvme_io_md": false, 00:16:00.558 "write_zeroes": true, 00:16:00.558 "zcopy": false, 00:16:00.558 "get_zone_info": false, 00:16:00.558 "zone_management": false, 00:16:00.558 "zone_append": false, 00:16:00.558 "compare": false, 00:16:00.558 "compare_and_write": false, 00:16:00.558 "abort": false, 00:16:00.558 "seek_hole": false, 00:16:00.558 "seek_data": false, 00:16:00.558 "copy": false, 00:16:00.558 "nvme_iov_md": false 00:16:00.558 }, 00:16:00.558 "driver_specific": { 00:16:00.558 "raid": { 00:16:00.558 "uuid": "11186947-6632-4059-b698-2e44841f58a5", 00:16:00.558 "strip_size_kb": 64, 00:16:00.558 "state": "online", 00:16:00.558 "raid_level": "raid5f", 00:16:00.558 "superblock": true, 00:16:00.558 "num_base_bdevs": 4, 00:16:00.558 "num_base_bdevs_discovered": 4, 00:16:00.558 "num_base_bdevs_operational": 4, 00:16:00.558 "base_bdevs_list": [ 00:16:00.558 { 00:16:00.558 "name": "NewBaseBdev", 00:16:00.558 "uuid": "cdcf9067-a6ce-445c-8d51-ab15c28dda87", 00:16:00.558 "is_configured": true, 00:16:00.558 "data_offset": 2048, 00:16:00.558 "data_size": 63488 00:16:00.558 }, 00:16:00.558 { 00:16:00.558 "name": "BaseBdev2", 00:16:00.558 "uuid": "62c9bed9-72cb-4f9d-84a0-0ef24b3a923e", 00:16:00.558 "is_configured": true, 00:16:00.558 "data_offset": 2048, 00:16:00.558 "data_size": 63488 00:16:00.558 }, 00:16:00.558 { 00:16:00.558 "name": "BaseBdev3", 00:16:00.558 "uuid": "112ed016-c73b-446e-bf97-7e54b324b8b5", 00:16:00.558 "is_configured": true, 00:16:00.558 "data_offset": 2048, 00:16:00.558 "data_size": 63488 00:16:00.558 }, 00:16:00.558 { 00:16:00.558 "name": "BaseBdev4", 00:16:00.558 "uuid": "a30a2e31-d5e8-4287-addd-94ab9efa4d02", 00:16:00.558 "is_configured": true, 00:16:00.558 "data_offset": 2048, 00:16:00.558 "data_size": 63488 00:16:00.558 } 00:16:00.558 ] 00:16:00.558 } 00:16:00.558 } 00:16:00.558 }' 00:16:00.558 20:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:16:00.558 20:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:00.558 BaseBdev2 00:16:00.558 BaseBdev3 00:16:00.558 BaseBdev4' 00:16:00.558 20:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:00.558 20:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:00.558 20:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:00.558 20:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:00.558 20:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.558 20:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.558 20:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:00.558 20:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.558 20:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:00.558 20:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:00.558 20:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:00.558 20:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:00.558 20:28:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:00.558 20:28:53 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.558 20:28:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.558 20:28:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.558 20:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:00.558 20:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:00.558 20:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:00.558 20:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:00.558 20:28:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.558 20:28:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.558 20:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:00.558 20:28:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.558 20:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:00.558 20:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:00.558 20:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:00.558 20:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:00.558 20:28:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.558 20:28:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.558 20:28:54 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:00.558 20:28:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.818 20:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:00.818 20:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:00.818 20:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:00.818 20:28:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.818 20:28:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.818 [2024-11-26 20:28:54.122490] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:00.818 [2024-11-26 20:28:54.122522] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:00.818 [2024-11-26 20:28:54.122607] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:00.818 [2024-11-26 20:28:54.122931] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:00.818 [2024-11-26 20:28:54.122950] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:16:00.818 20:28:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.818 20:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 94489 00:16:00.818 20:28:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 94489 ']' 00:16:00.818 20:28:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 94489 00:16:00.818 20:28:54 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:16:00.818 20:28:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:00.818 20:28:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94489 00:16:00.818 20:28:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:00.818 20:28:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:00.818 20:28:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94489' 00:16:00.818 killing process with pid 94489 00:16:00.818 20:28:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 94489 00:16:00.818 [2024-11-26 20:28:54.156223] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:00.818 20:28:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 94489 00:16:00.818 [2024-11-26 20:28:54.223226] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:01.203 20:28:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:01.203 00:16:01.203 real 0m9.670s 00:16:01.203 user 0m16.376s 00:16:01.203 sys 0m2.032s 00:16:01.203 20:28:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:01.203 20:28:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.203 ************************************ 00:16:01.203 END TEST raid5f_state_function_test_sb 00:16:01.203 ************************************ 00:16:01.203 20:28:54 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:16:01.203 20:28:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:01.203 
20:28:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:01.203 20:28:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:01.203 ************************************ 00:16:01.203 START TEST raid5f_superblock_test 00:16:01.203 ************************************ 00:16:01.203 20:28:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 4 00:16:01.203 20:28:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:16:01.203 20:28:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:16:01.203 20:28:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:01.203 20:28:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:01.203 20:28:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:01.203 20:28:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:01.203 20:28:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:01.203 20:28:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:01.203 20:28:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:01.203 20:28:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:01.203 20:28:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:01.203 20:28:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:01.203 20:28:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:01.203 20:28:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:16:01.203 20:28:54 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:01.203 20:28:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:01.203 20:28:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=95133 00:16:01.203 20:28:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:01.203 20:28:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 95133 00:16:01.203 20:28:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 95133 ']' 00:16:01.203 20:28:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:01.203 20:28:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:01.203 20:28:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:01.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:01.203 20:28:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:01.203 20:28:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.203 [2024-11-26 20:28:54.719314] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:16:01.203 [2024-11-26 20:28:54.719456] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95133 ] 00:16:01.463 [2024-11-26 20:28:54.870893] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.463 [2024-11-26 20:28:54.954577] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.722 [2024-11-26 20:28:55.030239] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:01.722 [2024-11-26 20:28:55.030294] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:02.290 20:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:02.290 20:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:16:02.290 20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:02.290 20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:02.290 20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:02.290 20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:02.290 20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:02.290 20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.291 malloc1 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.291 [2024-11-26 20:28:55.634150] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:02.291 [2024-11-26 20:28:55.634217] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:02.291 [2024-11-26 20:28:55.634239] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:02.291 [2024-11-26 20:28:55.634255] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:02.291 [2024-11-26 20:28:55.636413] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:02.291 [2024-11-26 20:28:55.636466] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:02.291 pt1 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.291 malloc2 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.291 [2024-11-26 20:28:55.675123] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:02.291 [2024-11-26 20:28:55.675184] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:02.291 [2024-11-26 20:28:55.675203] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:02.291 [2024-11-26 20:28:55.675215] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:02.291 [2024-11-26 20:28:55.677762] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:02.291 [2024-11-26 20:28:55.677796] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:02.291 pt2 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.291 malloc3 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.291 [2024-11-26 20:28:55.709374] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:02.291 [2024-11-26 20:28:55.709429] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:02.291 [2024-11-26 20:28:55.709449] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:02.291 [2024-11-26 20:28:55.709462] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:02.291 [2024-11-26 20:28:55.711740] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:02.291 [2024-11-26 20:28:55.711771] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:02.291 pt3 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.291 20:28:55 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.291 malloc4 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.291 [2024-11-26 20:28:55.740769] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:02.291 [2024-11-26 20:28:55.740822] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:02.291 [2024-11-26 20:28:55.740837] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:02.291 [2024-11-26 20:28:55.740850] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:02.291 [2024-11-26 20:28:55.742971] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:02.291 [2024-11-26 20:28:55.743005] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:02.291 pt4 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:02.291 [2024-11-26 20:28:55.752817] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:02.291 [2024-11-26 20:28:55.754625] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:02.291 [2024-11-26 20:28:55.754695] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:02.291 [2024-11-26 20:28:55.754757] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:02.291 [2024-11-26 20:28:55.754938] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:16:02.291 [2024-11-26 20:28:55.754957] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:02.291 [2024-11-26 20:28:55.755221] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:02.291 [2024-11-26 20:28:55.755704] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:16:02.291 [2024-11-26 20:28:55.755727] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:16:02.291 [2024-11-26 20:28:55.755854] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:02.291 
20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.291 "name": "raid_bdev1", 00:16:02.291 "uuid": "02ad9e92-4ab0-4158-99dd-c41262e69286", 00:16:02.291 "strip_size_kb": 64, 00:16:02.291 "state": "online", 00:16:02.291 "raid_level": "raid5f", 00:16:02.291 "superblock": true, 00:16:02.291 "num_base_bdevs": 4, 00:16:02.291 "num_base_bdevs_discovered": 4, 00:16:02.291 "num_base_bdevs_operational": 4, 00:16:02.291 "base_bdevs_list": [ 00:16:02.291 { 00:16:02.291 "name": "pt1", 00:16:02.291 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:02.291 "is_configured": true, 00:16:02.291 "data_offset": 2048, 00:16:02.291 "data_size": 63488 00:16:02.291 }, 00:16:02.291 { 00:16:02.291 "name": "pt2", 00:16:02.291 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:02.291 "is_configured": true, 00:16:02.291 "data_offset": 2048, 00:16:02.291 
"data_size": 63488 00:16:02.291 }, 00:16:02.291 { 00:16:02.291 "name": "pt3", 00:16:02.291 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:02.291 "is_configured": true, 00:16:02.291 "data_offset": 2048, 00:16:02.291 "data_size": 63488 00:16:02.291 }, 00:16:02.291 { 00:16:02.291 "name": "pt4", 00:16:02.291 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:02.291 "is_configured": true, 00:16:02.291 "data_offset": 2048, 00:16:02.291 "data_size": 63488 00:16:02.291 } 00:16:02.291 ] 00:16:02.291 }' 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.291 20:28:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.859 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:02.860 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:02.860 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:02.860 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:02.860 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:02.860 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:02.860 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:02.860 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:02.860 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.860 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.860 [2024-11-26 20:28:56.218244] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:02.860 20:28:56 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.860 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:02.860 "name": "raid_bdev1", 00:16:02.860 "aliases": [ 00:16:02.860 "02ad9e92-4ab0-4158-99dd-c41262e69286" 00:16:02.860 ], 00:16:02.860 "product_name": "Raid Volume", 00:16:02.860 "block_size": 512, 00:16:02.860 "num_blocks": 190464, 00:16:02.860 "uuid": "02ad9e92-4ab0-4158-99dd-c41262e69286", 00:16:02.860 "assigned_rate_limits": { 00:16:02.860 "rw_ios_per_sec": 0, 00:16:02.860 "rw_mbytes_per_sec": 0, 00:16:02.860 "r_mbytes_per_sec": 0, 00:16:02.860 "w_mbytes_per_sec": 0 00:16:02.860 }, 00:16:02.860 "claimed": false, 00:16:02.860 "zoned": false, 00:16:02.860 "supported_io_types": { 00:16:02.860 "read": true, 00:16:02.860 "write": true, 00:16:02.860 "unmap": false, 00:16:02.860 "flush": false, 00:16:02.860 "reset": true, 00:16:02.860 "nvme_admin": false, 00:16:02.860 "nvme_io": false, 00:16:02.860 "nvme_io_md": false, 00:16:02.860 "write_zeroes": true, 00:16:02.860 "zcopy": false, 00:16:02.860 "get_zone_info": false, 00:16:02.860 "zone_management": false, 00:16:02.860 "zone_append": false, 00:16:02.860 "compare": false, 00:16:02.860 "compare_and_write": false, 00:16:02.860 "abort": false, 00:16:02.860 "seek_hole": false, 00:16:02.860 "seek_data": false, 00:16:02.860 "copy": false, 00:16:02.860 "nvme_iov_md": false 00:16:02.860 }, 00:16:02.860 "driver_specific": { 00:16:02.860 "raid": { 00:16:02.860 "uuid": "02ad9e92-4ab0-4158-99dd-c41262e69286", 00:16:02.860 "strip_size_kb": 64, 00:16:02.860 "state": "online", 00:16:02.860 "raid_level": "raid5f", 00:16:02.860 "superblock": true, 00:16:02.860 "num_base_bdevs": 4, 00:16:02.860 "num_base_bdevs_discovered": 4, 00:16:02.860 "num_base_bdevs_operational": 4, 00:16:02.860 "base_bdevs_list": [ 00:16:02.860 { 00:16:02.860 "name": "pt1", 00:16:02.860 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:02.860 "is_configured": true, 00:16:02.860 "data_offset": 2048, 
00:16:02.860 "data_size": 63488 00:16:02.860 }, 00:16:02.860 { 00:16:02.860 "name": "pt2", 00:16:02.860 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:02.860 "is_configured": true, 00:16:02.860 "data_offset": 2048, 00:16:02.860 "data_size": 63488 00:16:02.860 }, 00:16:02.860 { 00:16:02.860 "name": "pt3", 00:16:02.860 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:02.860 "is_configured": true, 00:16:02.860 "data_offset": 2048, 00:16:02.860 "data_size": 63488 00:16:02.860 }, 00:16:02.860 { 00:16:02.860 "name": "pt4", 00:16:02.860 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:02.860 "is_configured": true, 00:16:02.860 "data_offset": 2048, 00:16:02.860 "data_size": 63488 00:16:02.860 } 00:16:02.860 ] 00:16:02.860 } 00:16:02.860 } 00:16:02.860 }' 00:16:02.860 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:02.860 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:02.860 pt2 00:16:02.860 pt3 00:16:02.860 pt4' 00:16:02.860 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:02.860 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:02.860 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:02.860 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:02.860 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:02.860 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.860 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.860 20:28:56 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.860 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:02.860 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:02.860 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:02.860 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:02.860 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:02.860 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.860 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.119 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.119 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:03.119 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:03.119 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:03.119 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:03.119 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.119 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.119 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:03.119 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.119 20:28:56 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:03.119 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:03.119 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:03.119 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:03.119 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:03.119 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.119 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.119 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.119 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:03.119 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:03.119 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:03.119 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.119 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.119 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:03.119 [2024-11-26 20:28:56.517710] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:03.119 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.119 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=02ad9e92-4ab0-4158-99dd-c41262e69286 00:16:03.119 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
02ad9e92-4ab0-4158-99dd-c41262e69286 ']' 00:16:03.119 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:03.119 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.119 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.119 [2024-11-26 20:28:56.561430] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:03.119 [2024-11-26 20:28:56.561470] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:03.119 [2024-11-26 20:28:56.561570] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:03.119 [2024-11-26 20:28:56.561687] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:03.119 [2024-11-26 20:28:56.561705] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:16:03.119 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.119 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:03.119 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.119 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.119 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.119 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.119 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:03.119 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:03.119 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:03.119 
20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:03.119 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.120 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.120 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.120 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:03.120 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:03.120 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.120 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.120 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.120 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:03.120 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:03.120 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.120 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.120 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.120 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:03.120 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:16:03.120 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.120 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.120 20:28:56 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.120 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:03.120 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:03.120 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.120 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.382 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.382 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:03.382 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:03.382 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:16:03.382 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:03.382 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:03.382 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:03.382 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:03.382 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:03.382 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:03.382 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:16:03.382 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.383 [2024-11-26 20:28:56.709238] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:03.383 [2024-11-26 20:28:56.711152] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:03.383 [2024-11-26 20:28:56.711207] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:03.383 [2024-11-26 20:28:56.711237] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:16:03.383 [2024-11-26 20:28:56.711287] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:03.383 [2024-11-26 20:28:56.711331] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:03.383 [2024-11-26 20:28:56.711350] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:03.383 [2024-11-26 20:28:56.711366] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:16:03.383 [2024-11-26 20:28:56.711381] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:03.383 [2024-11-26 20:28:56.711393] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:16:03.383 request: 00:16:03.383 { 00:16:03.383 "name": "raid_bdev1", 00:16:03.383 "raid_level": "raid5f", 00:16:03.383 "base_bdevs": [ 00:16:03.383 "malloc1", 00:16:03.383 "malloc2", 00:16:03.383 "malloc3", 00:16:03.383 "malloc4" 00:16:03.383 ], 00:16:03.383 "strip_size_kb": 64, 00:16:03.383 "superblock": false, 00:16:03.383 "method": "bdev_raid_create", 00:16:03.383 "req_id": 1 00:16:03.383 } 00:16:03.383 Got JSON-RPC error response 
00:16:03.383 response: 00:16:03.383 { 00:16:03.383 "code": -17, 00:16:03.383 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:03.383 } 00:16:03.383 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:03.383 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:16:03.383 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:03.383 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:03.383 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:03.383 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:03.383 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.383 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.383 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.383 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.383 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:03.383 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:03.383 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:03.383 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.383 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.383 [2024-11-26 20:28:56.765103] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:03.383 [2024-11-26 20:28:56.765172] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:16:03.383 [2024-11-26 20:28:56.765196] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:03.383 [2024-11-26 20:28:56.765206] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.383 [2024-11-26 20:28:56.767471] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.383 [2024-11-26 20:28:56.767503] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:03.383 [2024-11-26 20:28:56.767589] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:03.383 [2024-11-26 20:28:56.767648] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:03.383 pt1 00:16:03.383 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.383 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:03.383 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:03.383 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:03.383 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:03.383 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:03.383 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:03.383 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.383 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.383 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.383 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:16:03.383 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.383 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.383 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.383 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.383 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.383 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.383 "name": "raid_bdev1", 00:16:03.383 "uuid": "02ad9e92-4ab0-4158-99dd-c41262e69286", 00:16:03.383 "strip_size_kb": 64, 00:16:03.383 "state": "configuring", 00:16:03.383 "raid_level": "raid5f", 00:16:03.383 "superblock": true, 00:16:03.383 "num_base_bdevs": 4, 00:16:03.383 "num_base_bdevs_discovered": 1, 00:16:03.383 "num_base_bdevs_operational": 4, 00:16:03.383 "base_bdevs_list": [ 00:16:03.383 { 00:16:03.383 "name": "pt1", 00:16:03.383 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:03.383 "is_configured": true, 00:16:03.383 "data_offset": 2048, 00:16:03.383 "data_size": 63488 00:16:03.383 }, 00:16:03.383 { 00:16:03.383 "name": null, 00:16:03.383 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:03.383 "is_configured": false, 00:16:03.383 "data_offset": 2048, 00:16:03.383 "data_size": 63488 00:16:03.383 }, 00:16:03.383 { 00:16:03.383 "name": null, 00:16:03.383 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:03.383 "is_configured": false, 00:16:03.383 "data_offset": 2048, 00:16:03.383 "data_size": 63488 00:16:03.383 }, 00:16:03.383 { 00:16:03.383 "name": null, 00:16:03.383 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:03.383 "is_configured": false, 00:16:03.383 "data_offset": 2048, 00:16:03.383 "data_size": 63488 00:16:03.383 } 00:16:03.383 ] 00:16:03.383 }' 
00:16:03.383 20:28:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.383 20:28:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.952 20:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:16:03.952 20:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:03.952 20:28:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.952 20:28:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.952 [2024-11-26 20:28:57.212324] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:03.952 [2024-11-26 20:28:57.212390] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.952 [2024-11-26 20:28:57.212412] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:03.952 [2024-11-26 20:28:57.212422] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.952 [2024-11-26 20:28:57.212879] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.952 [2024-11-26 20:28:57.212899] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:03.952 [2024-11-26 20:28:57.212992] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:03.952 [2024-11-26 20:28:57.213017] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:03.952 pt2 00:16:03.952 20:28:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.952 20:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:03.952 20:28:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:03.952 20:28:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.952 [2024-11-26 20:28:57.224311] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:03.952 20:28:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.952 20:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:03.952 20:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:03.952 20:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:03.952 20:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:03.952 20:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:03.952 20:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:03.952 20:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.952 20:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.952 20:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.952 20:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.952 20:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.952 20:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.952 20:28:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.952 20:28:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.952 20:28:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:16:03.952 20:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.952 "name": "raid_bdev1", 00:16:03.952 "uuid": "02ad9e92-4ab0-4158-99dd-c41262e69286", 00:16:03.952 "strip_size_kb": 64, 00:16:03.952 "state": "configuring", 00:16:03.952 "raid_level": "raid5f", 00:16:03.952 "superblock": true, 00:16:03.952 "num_base_bdevs": 4, 00:16:03.952 "num_base_bdevs_discovered": 1, 00:16:03.952 "num_base_bdevs_operational": 4, 00:16:03.952 "base_bdevs_list": [ 00:16:03.952 { 00:16:03.952 "name": "pt1", 00:16:03.952 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:03.952 "is_configured": true, 00:16:03.952 "data_offset": 2048, 00:16:03.952 "data_size": 63488 00:16:03.952 }, 00:16:03.952 { 00:16:03.952 "name": null, 00:16:03.952 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:03.952 "is_configured": false, 00:16:03.952 "data_offset": 0, 00:16:03.952 "data_size": 63488 00:16:03.952 }, 00:16:03.952 { 00:16:03.952 "name": null, 00:16:03.952 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:03.952 "is_configured": false, 00:16:03.952 "data_offset": 2048, 00:16:03.952 "data_size": 63488 00:16:03.952 }, 00:16:03.952 { 00:16:03.952 "name": null, 00:16:03.952 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:03.952 "is_configured": false, 00:16:03.952 "data_offset": 2048, 00:16:03.952 "data_size": 63488 00:16:03.952 } 00:16:03.952 ] 00:16:03.952 }' 00:16:03.952 20:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.952 20:28:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.212 20:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:04.212 20:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:04.212 20:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:16:04.212 20:28:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.212 20:28:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.212 [2024-11-26 20:28:57.711504] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:04.212 [2024-11-26 20:28:57.711575] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.212 [2024-11-26 20:28:57.711593] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:04.212 [2024-11-26 20:28:57.711605] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.212 [2024-11-26 20:28:57.712028] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.212 [2024-11-26 20:28:57.712048] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:04.212 [2024-11-26 20:28:57.712121] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:04.212 [2024-11-26 20:28:57.712145] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:04.212 pt2 00:16:04.212 20:28:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.212 20:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:04.212 20:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:04.212 20:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:04.212 20:28:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.212 20:28:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.212 [2024-11-26 20:28:57.723421] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:16:04.212 [2024-11-26 20:28:57.723475] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.212 [2024-11-26 20:28:57.723494] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:04.212 [2024-11-26 20:28:57.723504] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.212 [2024-11-26 20:28:57.723863] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.212 [2024-11-26 20:28:57.723887] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:04.212 [2024-11-26 20:28:57.723948] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:04.212 [2024-11-26 20:28:57.723968] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:04.212 pt3 00:16:04.212 20:28:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.212 20:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:04.212 20:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:04.212 20:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:04.212 20:28:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.212 20:28:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.212 [2024-11-26 20:28:57.735396] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:04.212 [2024-11-26 20:28:57.735446] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.212 [2024-11-26 20:28:57.735462] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:04.212 [2024-11-26 20:28:57.735471] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.212 [2024-11-26 20:28:57.735800] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.212 [2024-11-26 20:28:57.735819] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:04.212 [2024-11-26 20:28:57.735873] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:04.212 [2024-11-26 20:28:57.735892] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:04.212 [2024-11-26 20:28:57.735992] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:16:04.212 [2024-11-26 20:28:57.736004] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:04.212 [2024-11-26 20:28:57.736225] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:04.212 [2024-11-26 20:28:57.736772] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:16:04.212 [2024-11-26 20:28:57.736793] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:16:04.212 [2024-11-26 20:28:57.736921] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:04.212 pt4 00:16:04.212 20:28:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.212 20:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:04.212 20:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:04.212 20:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:04.212 20:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:04.212 20:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:04.212 20:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:04.212 20:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:04.212 20:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:04.213 20:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.213 20:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.213 20:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.213 20:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.213 20:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.213 20:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.213 20:28:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.213 20:28:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.472 20:28:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.472 20:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.472 "name": "raid_bdev1", 00:16:04.472 "uuid": "02ad9e92-4ab0-4158-99dd-c41262e69286", 00:16:04.472 "strip_size_kb": 64, 00:16:04.472 "state": "online", 00:16:04.472 "raid_level": "raid5f", 00:16:04.472 "superblock": true, 00:16:04.472 "num_base_bdevs": 4, 00:16:04.472 "num_base_bdevs_discovered": 4, 00:16:04.472 "num_base_bdevs_operational": 4, 00:16:04.472 "base_bdevs_list": [ 00:16:04.472 { 00:16:04.472 "name": "pt1", 00:16:04.472 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:04.472 "is_configured": true, 00:16:04.472 
"data_offset": 2048, 00:16:04.472 "data_size": 63488 00:16:04.472 }, 00:16:04.472 { 00:16:04.472 "name": "pt2", 00:16:04.472 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:04.472 "is_configured": true, 00:16:04.472 "data_offset": 2048, 00:16:04.472 "data_size": 63488 00:16:04.472 }, 00:16:04.472 { 00:16:04.472 "name": "pt3", 00:16:04.472 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:04.472 "is_configured": true, 00:16:04.472 "data_offset": 2048, 00:16:04.472 "data_size": 63488 00:16:04.472 }, 00:16:04.472 { 00:16:04.472 "name": "pt4", 00:16:04.472 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:04.472 "is_configured": true, 00:16:04.472 "data_offset": 2048, 00:16:04.472 "data_size": 63488 00:16:04.472 } 00:16:04.472 ] 00:16:04.472 }' 00:16:04.472 20:28:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.472 20:28:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.731 20:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:04.731 20:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:04.731 20:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:04.731 20:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:04.731 20:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:04.731 20:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:04.731 20:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:04.731 20:28:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.731 20:28:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.731 20:28:58 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:04.731 [2024-11-26 20:28:58.147217] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:04.731 20:28:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.731 20:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:04.731 "name": "raid_bdev1", 00:16:04.731 "aliases": [ 00:16:04.731 "02ad9e92-4ab0-4158-99dd-c41262e69286" 00:16:04.731 ], 00:16:04.731 "product_name": "Raid Volume", 00:16:04.731 "block_size": 512, 00:16:04.731 "num_blocks": 190464, 00:16:04.731 "uuid": "02ad9e92-4ab0-4158-99dd-c41262e69286", 00:16:04.731 "assigned_rate_limits": { 00:16:04.731 "rw_ios_per_sec": 0, 00:16:04.731 "rw_mbytes_per_sec": 0, 00:16:04.731 "r_mbytes_per_sec": 0, 00:16:04.731 "w_mbytes_per_sec": 0 00:16:04.731 }, 00:16:04.731 "claimed": false, 00:16:04.731 "zoned": false, 00:16:04.731 "supported_io_types": { 00:16:04.731 "read": true, 00:16:04.731 "write": true, 00:16:04.731 "unmap": false, 00:16:04.731 "flush": false, 00:16:04.731 "reset": true, 00:16:04.731 "nvme_admin": false, 00:16:04.731 "nvme_io": false, 00:16:04.731 "nvme_io_md": false, 00:16:04.731 "write_zeroes": true, 00:16:04.731 "zcopy": false, 00:16:04.731 "get_zone_info": false, 00:16:04.731 "zone_management": false, 00:16:04.731 "zone_append": false, 00:16:04.731 "compare": false, 00:16:04.731 "compare_and_write": false, 00:16:04.731 "abort": false, 00:16:04.731 "seek_hole": false, 00:16:04.731 "seek_data": false, 00:16:04.731 "copy": false, 00:16:04.731 "nvme_iov_md": false 00:16:04.731 }, 00:16:04.731 "driver_specific": { 00:16:04.731 "raid": { 00:16:04.731 "uuid": "02ad9e92-4ab0-4158-99dd-c41262e69286", 00:16:04.731 "strip_size_kb": 64, 00:16:04.731 "state": "online", 00:16:04.731 "raid_level": "raid5f", 00:16:04.731 "superblock": true, 00:16:04.731 "num_base_bdevs": 4, 00:16:04.731 "num_base_bdevs_discovered": 4, 
00:16:04.731 "num_base_bdevs_operational": 4, 00:16:04.731 "base_bdevs_list": [ 00:16:04.731 { 00:16:04.731 "name": "pt1", 00:16:04.731 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:04.731 "is_configured": true, 00:16:04.731 "data_offset": 2048, 00:16:04.731 "data_size": 63488 00:16:04.731 }, 00:16:04.731 { 00:16:04.731 "name": "pt2", 00:16:04.731 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:04.731 "is_configured": true, 00:16:04.731 "data_offset": 2048, 00:16:04.731 "data_size": 63488 00:16:04.731 }, 00:16:04.731 { 00:16:04.731 "name": "pt3", 00:16:04.731 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:04.731 "is_configured": true, 00:16:04.731 "data_offset": 2048, 00:16:04.731 "data_size": 63488 00:16:04.731 }, 00:16:04.731 { 00:16:04.731 "name": "pt4", 00:16:04.731 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:04.731 "is_configured": true, 00:16:04.731 "data_offset": 2048, 00:16:04.731 "data_size": 63488 00:16:04.731 } 00:16:04.731 ] 00:16:04.731 } 00:16:04.731 } 00:16:04.731 }' 00:16:04.731 20:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:04.731 20:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:04.731 pt2 00:16:04.731 pt3 00:16:04.731 pt4' 00:16:04.731 20:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:04.731 20:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:04.731 20:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:04.991 20:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:04.991 20:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:16:04.991 20:28:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.991 20:28:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.991 20:28:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.991 20:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:04.991 20:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:04.991 20:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:04.991 20:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:04.991 20:28:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.991 20:28:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.991 20:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:04.991 20:28:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.991 20:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:04.991 20:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:04.991 20:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:04.991 20:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:04.991 20:28:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.991 20:28:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.991 20:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:04.991 20:28:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.991 20:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:04.991 20:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:04.991 20:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:04.991 20:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:04.991 20:28:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.991 20:28:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.991 20:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:04.991 20:28:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.991 20:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:04.991 20:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:04.991 20:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:04.991 20:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:04.991 20:28:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.991 20:28:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.991 [2024-11-26 20:28:58.494633] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:04.991 20:28:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.991 20:28:58 
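The per-base-bdev checks above pipe each `bdev_get_bdevs -b ptN` result through jq's `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")` and compare the result against the raid volume's own value (`cmp_base_bdev='512   '` — jq renders null fields as empty strings, so 512-byte bdevs with no metadata yield `512` plus three trailing spaces, which is what the `[[ 512    == \5\1\2\ \ \  ]]` pattern matches). A minimal Python sketch of that join logic, assuming the bdev JSON shape shown in this log:

```python
def jq_join_fields(bdev):
    """Mimic the test's jq filter
        [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")
    for the fields seen in this trace; jq joins null as an empty string,
    which produces the trailing spaces the bash pattern match expects."""
    keys = ("block_size", "md_size", "md_interleave", "dif_type")
    return " ".join("" if bdev.get(k) is None else str(bdev.get(k)) for k in keys)

# Shaped like the pt1 passthru bdev in this log: 512-byte blocks, no metadata.
pt1 = {"name": "pt1", "block_size": 512}
print(repr(jq_join_fields(pt1)))  # prints "'512   '" (three trailing spaces)
```

This is only an illustration of the comparison the script performs, not SPDK code.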
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 02ad9e92-4ab0-4158-99dd-c41262e69286 '!=' 02ad9e92-4ab0-4158-99dd-c41262e69286 ']' 00:16:04.991 20:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:16:04.991 20:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:04.991 20:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:04.991 20:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:04.991 20:28:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.991 20:28:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.991 [2024-11-26 20:28:58.530416] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:04.991 20:28:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.991 20:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:04.991 20:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:04.991 20:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:04.991 20:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:04.991 20:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:04.991 20:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:04.991 20:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.991 20:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.991 20:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:04.991 20:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.251 20:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.251 20:28:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.251 20:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.251 20:28:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.251 20:28:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.251 20:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.251 "name": "raid_bdev1", 00:16:05.251 "uuid": "02ad9e92-4ab0-4158-99dd-c41262e69286", 00:16:05.251 "strip_size_kb": 64, 00:16:05.251 "state": "online", 00:16:05.251 "raid_level": "raid5f", 00:16:05.251 "superblock": true, 00:16:05.251 "num_base_bdevs": 4, 00:16:05.251 "num_base_bdevs_discovered": 3, 00:16:05.251 "num_base_bdevs_operational": 3, 00:16:05.251 "base_bdevs_list": [ 00:16:05.251 { 00:16:05.251 "name": null, 00:16:05.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.251 "is_configured": false, 00:16:05.251 "data_offset": 0, 00:16:05.251 "data_size": 63488 00:16:05.251 }, 00:16:05.251 { 00:16:05.251 "name": "pt2", 00:16:05.251 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:05.251 "is_configured": true, 00:16:05.251 "data_offset": 2048, 00:16:05.251 "data_size": 63488 00:16:05.251 }, 00:16:05.251 { 00:16:05.251 "name": "pt3", 00:16:05.251 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:05.251 "is_configured": true, 00:16:05.251 "data_offset": 2048, 00:16:05.251 "data_size": 63488 00:16:05.251 }, 00:16:05.251 { 00:16:05.251 "name": "pt4", 00:16:05.251 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:05.251 "is_configured": true, 00:16:05.251 
"data_offset": 2048, 00:16:05.251 "data_size": 63488 00:16:05.251 } 00:16:05.251 ] 00:16:05.251 }' 00:16:05.251 20:28:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.251 20:28:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.511 20:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:05.511 20:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.511 20:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.511 [2024-11-26 20:28:59.045448] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:05.511 [2024-11-26 20:28:59.045484] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:05.511 [2024-11-26 20:28:59.045574] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:05.511 [2024-11-26 20:28:59.045659] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:05.511 [2024-11-26 20:28:59.045672] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:16:05.511 20:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.511 20:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.511 20:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.511 20:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.511 20:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:05.511 20:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.772 20:28:59 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:05.772 20:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:05.772 20:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:05.772 20:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:05.772 20:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:05.772 20:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.772 20:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.772 20:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.772 20:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:05.772 20:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:05.772 20:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:16:05.772 20:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.772 20:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.772 20:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.772 20:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:05.772 20:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:05.772 20:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:16:05.772 20:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.772 20:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.772 20:28:59 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.772 20:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:05.772 20:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:05.772 20:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:05.772 20:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:05.772 20:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:05.772 20:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.772 20:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.772 [2024-11-26 20:28:59.181212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:05.772 [2024-11-26 20:28:59.181285] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:05.772 [2024-11-26 20:28:59.181306] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:05.772 [2024-11-26 20:28:59.181317] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:05.772 [2024-11-26 20:28:59.183525] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:05.772 [2024-11-26 20:28:59.183564] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:05.772 [2024-11-26 20:28:59.183658] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:05.772 [2024-11-26 20:28:59.183696] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:05.772 pt2 00:16:05.772 20:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.772 20:28:59 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:05.772 20:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:05.772 20:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:05.772 20:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:05.772 20:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:05.772 20:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:05.772 20:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.772 20:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.772 20:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.772 20:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.772 20:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.772 20:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.772 20:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.772 20:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.772 20:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.772 20:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.772 "name": "raid_bdev1", 00:16:05.772 "uuid": "02ad9e92-4ab0-4158-99dd-c41262e69286", 00:16:05.772 "strip_size_kb": 64, 00:16:05.772 "state": "configuring", 00:16:05.772 "raid_level": "raid5f", 00:16:05.772 "superblock": true, 00:16:05.772 
"num_base_bdevs": 4, 00:16:05.772 "num_base_bdevs_discovered": 1, 00:16:05.772 "num_base_bdevs_operational": 3, 00:16:05.772 "base_bdevs_list": [ 00:16:05.772 { 00:16:05.772 "name": null, 00:16:05.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.772 "is_configured": false, 00:16:05.772 "data_offset": 2048, 00:16:05.772 "data_size": 63488 00:16:05.772 }, 00:16:05.772 { 00:16:05.772 "name": "pt2", 00:16:05.772 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:05.772 "is_configured": true, 00:16:05.772 "data_offset": 2048, 00:16:05.772 "data_size": 63488 00:16:05.772 }, 00:16:05.772 { 00:16:05.772 "name": null, 00:16:05.772 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:05.772 "is_configured": false, 00:16:05.772 "data_offset": 2048, 00:16:05.772 "data_size": 63488 00:16:05.772 }, 00:16:05.772 { 00:16:05.772 "name": null, 00:16:05.772 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:05.772 "is_configured": false, 00:16:05.772 "data_offset": 2048, 00:16:05.772 "data_size": 63488 00:16:05.772 } 00:16:05.772 ] 00:16:05.772 }' 00:16:05.772 20:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.772 20:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.341 20:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:06.341 20:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:06.341 20:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:06.341 20:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.341 20:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.341 [2024-11-26 20:28:59.672472] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:06.341 [2024-11-26 
20:28:59.672541] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:06.341 [2024-11-26 20:28:59.672561] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:06.341 [2024-11-26 20:28:59.672574] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:06.341 [2024-11-26 20:28:59.673041] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:06.341 [2024-11-26 20:28:59.673064] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:06.341 [2024-11-26 20:28:59.673147] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:06.341 [2024-11-26 20:28:59.673182] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:06.341 pt3 00:16:06.341 20:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.341 20:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:06.341 20:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:06.341 20:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:06.341 20:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:06.341 20:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:06.341 20:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:06.341 20:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.341 20:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.341 20:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:06.341 20:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.341 20:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.341 20:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.341 20:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.341 20:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.341 20:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.341 20:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.341 "name": "raid_bdev1", 00:16:06.341 "uuid": "02ad9e92-4ab0-4158-99dd-c41262e69286", 00:16:06.341 "strip_size_kb": 64, 00:16:06.341 "state": "configuring", 00:16:06.341 "raid_level": "raid5f", 00:16:06.341 "superblock": true, 00:16:06.341 "num_base_bdevs": 4, 00:16:06.341 "num_base_bdevs_discovered": 2, 00:16:06.341 "num_base_bdevs_operational": 3, 00:16:06.341 "base_bdevs_list": [ 00:16:06.341 { 00:16:06.341 "name": null, 00:16:06.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.341 "is_configured": false, 00:16:06.341 "data_offset": 2048, 00:16:06.341 "data_size": 63488 00:16:06.341 }, 00:16:06.341 { 00:16:06.341 "name": "pt2", 00:16:06.341 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:06.341 "is_configured": true, 00:16:06.341 "data_offset": 2048, 00:16:06.341 "data_size": 63488 00:16:06.341 }, 00:16:06.341 { 00:16:06.341 "name": "pt3", 00:16:06.341 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:06.341 "is_configured": true, 00:16:06.341 "data_offset": 2048, 00:16:06.341 "data_size": 63488 00:16:06.341 }, 00:16:06.341 { 00:16:06.341 "name": null, 00:16:06.341 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:06.341 "is_configured": false, 00:16:06.341 "data_offset": 2048, 
00:16:06.341 "data_size": 63488 00:16:06.341 } 00:16:06.341 ] 00:16:06.341 }' 00:16:06.341 20:28:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.341 20:28:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.601 20:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:06.601 20:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:06.601 20:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:16:06.601 20:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:06.601 20:29:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.601 20:29:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.601 [2024-11-26 20:29:00.059785] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:06.601 [2024-11-26 20:29:00.059857] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:06.601 [2024-11-26 20:29:00.059883] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:06.601 [2024-11-26 20:29:00.059893] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:06.601 [2024-11-26 20:29:00.060291] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:06.601 [2024-11-26 20:29:00.060309] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:06.601 [2024-11-26 20:29:00.060386] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:06.601 [2024-11-26 20:29:00.060408] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:06.601 [2024-11-26 20:29:00.060502] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:16:06.601 [2024-11-26 20:29:00.060512] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:06.601 [2024-11-26 20:29:00.060759] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:06.601 [2024-11-26 20:29:00.061346] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:16:06.601 [2024-11-26 20:29:00.061369] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:16:06.601 [2024-11-26 20:29:00.061606] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:06.601 pt4 00:16:06.601 20:29:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.601 20:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:06.601 20:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:06.601 20:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:06.601 20:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:06.601 20:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:06.601 20:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:06.601 20:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.601 20:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.601 20:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.601 20:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.601 
20:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.601 20:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.601 20:29:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.601 20:29:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.601 20:29:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.601 20:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.601 "name": "raid_bdev1", 00:16:06.601 "uuid": "02ad9e92-4ab0-4158-99dd-c41262e69286", 00:16:06.601 "strip_size_kb": 64, 00:16:06.601 "state": "online", 00:16:06.601 "raid_level": "raid5f", 00:16:06.601 "superblock": true, 00:16:06.601 "num_base_bdevs": 4, 00:16:06.601 "num_base_bdevs_discovered": 3, 00:16:06.601 "num_base_bdevs_operational": 3, 00:16:06.601 "base_bdevs_list": [ 00:16:06.601 { 00:16:06.601 "name": null, 00:16:06.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.601 "is_configured": false, 00:16:06.601 "data_offset": 2048, 00:16:06.601 "data_size": 63488 00:16:06.601 }, 00:16:06.601 { 00:16:06.601 "name": "pt2", 00:16:06.601 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:06.601 "is_configured": true, 00:16:06.601 "data_offset": 2048, 00:16:06.601 "data_size": 63488 00:16:06.601 }, 00:16:06.601 { 00:16:06.601 "name": "pt3", 00:16:06.601 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:06.601 "is_configured": true, 00:16:06.601 "data_offset": 2048, 00:16:06.601 "data_size": 63488 00:16:06.601 }, 00:16:06.601 { 00:16:06.601 "name": "pt4", 00:16:06.601 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:06.601 "is_configured": true, 00:16:06.601 "data_offset": 2048, 00:16:06.601 "data_size": 63488 00:16:06.601 } 00:16:06.601 ] 00:16:06.601 }' 00:16:06.601 20:29:00 
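The trace repeatedly uses the bdev_raid.sh@188 filter `.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name` to list configured members. A Python equivalent, fed a list shaped like the `base_bdevs_list` just dumped (the removed pt1 slot left unconfigured):

```python
# Pick the configured member names out of base_bdevs_list, as the
# test's jq select(.is_configured == true).name filter does.
base_bdevs_list = [
    {"name": None,  "is_configured": False},  # slot left by the deleted pt1
    {"name": "pt2", "is_configured": True},
    {"name": "pt3", "is_configured": True},
    {"name": "pt4", "is_configured": True},
]
configured = [b["name"] for b in base_bdevs_list if b["is_configured"]]
print(configured)  # ['pt2', 'pt3', 'pt4']
```

Illustration only; the real list entries carry the additional uuid/data_offset/data_size fields shown in the log.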
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.601 20:29:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.172 20:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:07.172 20:29:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.172 20:29:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.172 [2024-11-26 20:29:00.515593] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:07.172 [2024-11-26 20:29:00.515638] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:07.172 [2024-11-26 20:29:00.515720] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:07.172 [2024-11-26 20:29:00.515800] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:07.172 [2024-11-26 20:29:00.515815] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:16:07.172 20:29:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.172 20:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.172 20:29:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.172 20:29:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.172 20:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:07.172 20:29:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.172 20:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:07.172 20:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:16:07.172 20:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:16:07.172 20:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:16:07.172 20:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:16:07.172 20:29:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.172 20:29:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.172 20:29:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.172 20:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:07.172 20:29:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.172 20:29:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.172 [2024-11-26 20:29:00.587452] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:07.172 [2024-11-26 20:29:00.587511] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:07.172 [2024-11-26 20:29:00.587531] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:16:07.172 [2024-11-26 20:29:00.587541] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:07.172 [2024-11-26 20:29:00.590055] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:07.172 [2024-11-26 20:29:00.590091] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:07.172 [2024-11-26 20:29:00.590183] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:07.172 [2024-11-26 20:29:00.590232] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:07.172 
[2024-11-26 20:29:00.590359] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:07.172 [2024-11-26 20:29:00.590378] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:07.172 [2024-11-26 20:29:00.590399] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:16:07.172 [2024-11-26 20:29:00.590434] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:07.172 [2024-11-26 20:29:00.590558] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:07.172 pt1 00:16:07.172 20:29:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.172 20:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:16:07.172 20:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:07.172 20:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:07.172 20:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:07.172 20:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:07.172 20:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:07.172 20:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:07.172 20:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.172 20:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.172 20:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.172 20:29:00 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.172 20:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.172 20:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.172 20:29:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.172 20:29:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.172 20:29:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.172 20:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.172 "name": "raid_bdev1", 00:16:07.172 "uuid": "02ad9e92-4ab0-4158-99dd-c41262e69286", 00:16:07.172 "strip_size_kb": 64, 00:16:07.172 "state": "configuring", 00:16:07.172 "raid_level": "raid5f", 00:16:07.172 "superblock": true, 00:16:07.172 "num_base_bdevs": 4, 00:16:07.172 "num_base_bdevs_discovered": 2, 00:16:07.172 "num_base_bdevs_operational": 3, 00:16:07.172 "base_bdevs_list": [ 00:16:07.172 { 00:16:07.172 "name": null, 00:16:07.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.172 "is_configured": false, 00:16:07.172 "data_offset": 2048, 00:16:07.172 "data_size": 63488 00:16:07.172 }, 00:16:07.172 { 00:16:07.172 "name": "pt2", 00:16:07.172 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:07.172 "is_configured": true, 00:16:07.172 "data_offset": 2048, 00:16:07.172 "data_size": 63488 00:16:07.172 }, 00:16:07.172 { 00:16:07.172 "name": "pt3", 00:16:07.172 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:07.172 "is_configured": true, 00:16:07.172 "data_offset": 2048, 00:16:07.172 "data_size": 63488 00:16:07.172 }, 00:16:07.172 { 00:16:07.172 "name": null, 00:16:07.172 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:07.172 "is_configured": false, 00:16:07.172 "data_offset": 2048, 00:16:07.172 "data_size": 63488 00:16:07.172 } 00:16:07.172 ] 
00:16:07.172 }' 00:16:07.172 20:29:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.172 20:29:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.818 20:29:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:07.818 20:29:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:16:07.818 20:29:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.818 20:29:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.818 20:29:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.818 20:29:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:16:07.818 20:29:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:07.818 20:29:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.818 20:29:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.818 [2024-11-26 20:29:01.098611] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:07.818 [2024-11-26 20:29:01.098688] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:07.818 [2024-11-26 20:29:01.098712] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:16:07.818 [2024-11-26 20:29:01.098725] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:07.818 [2024-11-26 20:29:01.099210] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:07.818 [2024-11-26 20:29:01.099241] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:16:07.818 [2024-11-26 20:29:01.099328] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:07.818 [2024-11-26 20:29:01.099358] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:07.818 [2024-11-26 20:29:01.099476] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:16:07.818 [2024-11-26 20:29:01.099492] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:07.818 [2024-11-26 20:29:01.099796] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:07.818 [2024-11-26 20:29:01.100468] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:16:07.818 [2024-11-26 20:29:01.100493] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:16:07.818 [2024-11-26 20:29:01.100729] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:07.818 pt4 00:16:07.818 20:29:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.818 20:29:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:07.818 20:29:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:07.818 20:29:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:07.818 20:29:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:07.818 20:29:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:07.818 20:29:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:07.818 20:29:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.818 20:29:01 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.818 20:29:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.818 20:29:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.818 20:29:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.818 20:29:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.818 20:29:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.818 20:29:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.818 20:29:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.818 20:29:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.818 "name": "raid_bdev1", 00:16:07.818 "uuid": "02ad9e92-4ab0-4158-99dd-c41262e69286", 00:16:07.818 "strip_size_kb": 64, 00:16:07.818 "state": "online", 00:16:07.818 "raid_level": "raid5f", 00:16:07.818 "superblock": true, 00:16:07.818 "num_base_bdevs": 4, 00:16:07.818 "num_base_bdevs_discovered": 3, 00:16:07.818 "num_base_bdevs_operational": 3, 00:16:07.818 "base_bdevs_list": [ 00:16:07.818 { 00:16:07.818 "name": null, 00:16:07.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.818 "is_configured": false, 00:16:07.818 "data_offset": 2048, 00:16:07.818 "data_size": 63488 00:16:07.818 }, 00:16:07.818 { 00:16:07.818 "name": "pt2", 00:16:07.818 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:07.818 "is_configured": true, 00:16:07.818 "data_offset": 2048, 00:16:07.818 "data_size": 63488 00:16:07.818 }, 00:16:07.818 { 00:16:07.818 "name": "pt3", 00:16:07.818 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:07.818 "is_configured": true, 00:16:07.818 "data_offset": 2048, 00:16:07.818 "data_size": 63488 
00:16:07.818 }, 00:16:07.818 { 00:16:07.818 "name": "pt4", 00:16:07.818 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:07.818 "is_configured": true, 00:16:07.818 "data_offset": 2048, 00:16:07.818 "data_size": 63488 00:16:07.818 } 00:16:07.818 ] 00:16:07.818 }' 00:16:07.818 20:29:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.818 20:29:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.078 20:29:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:08.078 20:29:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:08.078 20:29:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.078 20:29:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.078 20:29:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.078 20:29:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:08.078 20:29:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:08.078 20:29:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.078 20:29:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.078 20:29:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:08.078 [2024-11-26 20:29:01.595133] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:08.078 20:29:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.336 20:29:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 02ad9e92-4ab0-4158-99dd-c41262e69286 '!=' 02ad9e92-4ab0-4158-99dd-c41262e69286 ']' 00:16:08.336 20:29:01 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 95133 00:16:08.336 20:29:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 95133 ']' 00:16:08.336 20:29:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 95133 00:16:08.336 20:29:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:16:08.336 20:29:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:08.337 20:29:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95133 00:16:08.337 20:29:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:08.337 20:29:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:08.337 killing process with pid 95133 00:16:08.337 20:29:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95133' 00:16:08.337 20:29:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 95133 00:16:08.337 [2024-11-26 20:29:01.683918] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:08.337 [2024-11-26 20:29:01.684063] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:08.337 20:29:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 95133 00:16:08.337 [2024-11-26 20:29:01.684192] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:08.337 [2024-11-26 20:29:01.684210] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:16:08.337 [2024-11-26 20:29:01.756758] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:08.595 20:29:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:08.595 
00:16:08.595 real 0m7.486s 00:16:08.595 user 0m12.527s 00:16:08.595 sys 0m1.585s 00:16:08.595 20:29:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:08.595 20:29:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.595 ************************************ 00:16:08.595 END TEST raid5f_superblock_test 00:16:08.595 ************************************ 00:16:08.854 20:29:02 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:16:08.854 20:29:02 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:16:08.854 20:29:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:08.854 20:29:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:08.854 20:29:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:08.854 ************************************ 00:16:08.854 START TEST raid5f_rebuild_test 00:16:08.854 ************************************ 00:16:08.854 20:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 false false true 00:16:08.854 20:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:08.854 20:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:08.854 20:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:08.854 20:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:08.854 20:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:08.854 20:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:08.854 20:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:08.854 20:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:16:08.854 20:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:08.854 20:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:08.854 20:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:08.854 20:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:08.854 20:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:08.854 20:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:08.854 20:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:08.854 20:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:08.854 20:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:08.854 20:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:08.854 20:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:08.854 20:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:08.854 20:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:08.854 20:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:08.854 20:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:08.854 20:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:08.854 20:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:08.854 20:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:08.855 20:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:08.855 20:29:02 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:08.855 20:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:08.855 20:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:08.855 20:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:08.855 20:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=95607 00:16:08.855 20:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:08.855 20:29:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 95607 00:16:08.855 20:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 95607 ']' 00:16:08.855 20:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.855 20:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:08.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:08.855 20:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:08.855 20:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:08.855 20:29:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.855 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:08.855 Zero copy mechanism will not be used. 00:16:08.855 [2024-11-26 20:29:02.291393] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:16:08.855 [2024-11-26 20:29:02.291525] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95607 ] 00:16:09.114 [2024-11-26 20:29:02.453868] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.114 [2024-11-26 20:29:02.539411] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.114 [2024-11-26 20:29:02.616730] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:09.114 [2024-11-26 20:29:02.616793] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:09.681 20:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:09.681 20:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:16:09.681 20:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:09.681 20:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:09.681 20:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.681 20:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.681 BaseBdev1_malloc 00:16:09.681 20:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.681 20:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:09.681 20:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.681 20:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.681 [2024-11-26 20:29:03.182391] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:16:09.681 [2024-11-26 20:29:03.182480] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.681 [2024-11-26 20:29:03.182521] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:09.681 [2024-11-26 20:29:03.182565] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.681 [2024-11-26 20:29:03.186059] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.681 [2024-11-26 20:29:03.186105] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:09.681 BaseBdev1 00:16:09.681 20:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.682 20:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:09.682 20:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:09.682 20:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.682 20:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.682 BaseBdev2_malloc 00:16:09.682 20:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.682 20:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:09.682 20:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.682 20:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.682 [2024-11-26 20:29:03.226380] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:09.682 [2024-11-26 20:29:03.226438] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.682 [2024-11-26 20:29:03.226462] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:09.682 [2024-11-26 20:29:03.226473] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.682 [2024-11-26 20:29:03.228628] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.682 [2024-11-26 20:29:03.228660] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:09.682 BaseBdev2 00:16:09.682 20:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.682 20:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:09.682 20:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:09.942 20:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.942 20:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.942 BaseBdev3_malloc 00:16:09.942 20:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.942 20:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:09.942 20:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.942 20:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.942 [2024-11-26 20:29:03.260204] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:09.942 [2024-11-26 20:29:03.260248] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.942 [2024-11-26 20:29:03.260283] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:09.942 [2024-11-26 20:29:03.260291] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.942 
[2024-11-26 20:29:03.262371] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.942 [2024-11-26 20:29:03.262403] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:09.942 BaseBdev3 00:16:09.942 20:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.942 20:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:09.942 20:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:09.942 20:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.942 20:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.942 BaseBdev4_malloc 00:16:09.942 20:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.942 20:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:09.942 20:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.942 20:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.942 [2024-11-26 20:29:03.291036] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:09.942 [2024-11-26 20:29:03.291089] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.942 [2024-11-26 20:29:03.291112] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:09.942 [2024-11-26 20:29:03.291121] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.942 [2024-11-26 20:29:03.293201] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.942 [2024-11-26 20:29:03.293233] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:16:09.942 BaseBdev4 00:16:09.942 20:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.942 20:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:09.942 20:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.942 20:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.942 spare_malloc 00:16:09.942 20:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.942 20:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:09.942 20:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.942 20:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.942 spare_delay 00:16:09.942 20:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.942 20:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:09.942 20:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.942 20:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.942 [2024-11-26 20:29:03.337818] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:09.942 [2024-11-26 20:29:03.337881] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.942 [2024-11-26 20:29:03.337907] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:09.942 [2024-11-26 20:29:03.337917] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.942 [2024-11-26 20:29:03.340121] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.942 [2024-11-26 20:29:03.340156] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:09.942 spare 00:16:09.942 20:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.942 20:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:09.942 20:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.942 20:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.942 [2024-11-26 20:29:03.349876] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:09.942 [2024-11-26 20:29:03.351683] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:09.942 [2024-11-26 20:29:03.351753] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:09.942 [2024-11-26 20:29:03.351791] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:09.942 [2024-11-26 20:29:03.351872] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:16:09.942 [2024-11-26 20:29:03.351887] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:09.942 [2024-11-26 20:29:03.352151] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:09.942 [2024-11-26 20:29:03.352607] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:16:09.942 [2024-11-26 20:29:03.352637] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:16:09.942 [2024-11-26 20:29:03.352768] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:09.942 20:29:03 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.942 20:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:09.942 20:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:09.942 20:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:09.942 20:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:09.942 20:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:09.942 20:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:09.942 20:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.942 20:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.942 20:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.942 20:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.942 20:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.942 20:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.942 20:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.942 20:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.942 20:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.942 20:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.942 "name": "raid_bdev1", 00:16:09.942 "uuid": "d4b8ba69-0e21-4ab0-b79d-900d3408c5fd", 00:16:09.942 "strip_size_kb": 64, 00:16:09.942 "state": "online", 00:16:09.942 
"raid_level": "raid5f", 00:16:09.942 "superblock": false, 00:16:09.942 "num_base_bdevs": 4, 00:16:09.942 "num_base_bdevs_discovered": 4, 00:16:09.942 "num_base_bdevs_operational": 4, 00:16:09.942 "base_bdevs_list": [ 00:16:09.942 { 00:16:09.942 "name": "BaseBdev1", 00:16:09.942 "uuid": "ba49a970-cdbf-5083-bb30-7e0e463373d1", 00:16:09.942 "is_configured": true, 00:16:09.942 "data_offset": 0, 00:16:09.942 "data_size": 65536 00:16:09.942 }, 00:16:09.942 { 00:16:09.942 "name": "BaseBdev2", 00:16:09.942 "uuid": "f9867e7b-2fbe-5c2e-867b-2cf3460cef8c", 00:16:09.942 "is_configured": true, 00:16:09.942 "data_offset": 0, 00:16:09.942 "data_size": 65536 00:16:09.942 }, 00:16:09.942 { 00:16:09.942 "name": "BaseBdev3", 00:16:09.942 "uuid": "9158b3b1-4b5a-50e8-91e9-1dfe55f59670", 00:16:09.942 "is_configured": true, 00:16:09.942 "data_offset": 0, 00:16:09.942 "data_size": 65536 00:16:09.942 }, 00:16:09.942 { 00:16:09.942 "name": "BaseBdev4", 00:16:09.942 "uuid": "89a2b593-6c83-5478-8ed1-34082d2d720f", 00:16:09.942 "is_configured": true, 00:16:09.942 "data_offset": 0, 00:16:09.942 "data_size": 65536 00:16:09.942 } 00:16:09.942 ] 00:16:09.942 }' 00:16:09.942 20:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.942 20:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.511 20:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:10.512 20:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:10.512 20:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.512 20:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.512 [2024-11-26 20:29:03.823061] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:10.512 20:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:10.512 20:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:16:10.512 20:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.512 20:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.512 20:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.512 20:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:10.512 20:29:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.512 20:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:10.512 20:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:10.512 20:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:10.512 20:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:10.512 20:29:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:10.512 20:29:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:10.512 20:29:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:10.512 20:29:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:10.512 20:29:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:10.512 20:29:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:10.512 20:29:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:10.512 20:29:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:10.512 20:29:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:16:10.512 20:29:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:10.771 [2024-11-26 20:29:04.110430] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:10.771 /dev/nbd0 00:16:10.771 20:29:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:10.771 20:29:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:10.771 20:29:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:10.771 20:29:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:16:10.771 20:29:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:10.771 20:29:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:10.771 20:29:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:10.771 20:29:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:16:10.771 20:29:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:10.771 20:29:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:10.771 20:29:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:10.771 1+0 records in 00:16:10.771 1+0 records out 00:16:10.771 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000416763 s, 9.8 MB/s 00:16:10.771 20:29:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:10.771 20:29:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:16:10.771 20:29:04 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:10.771 20:29:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:10.771 20:29:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:16:10.771 20:29:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:10.771 20:29:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:10.771 20:29:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:10.771 20:29:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:10.771 20:29:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:10.771 20:29:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:16:11.340 512+0 records in 00:16:11.340 512+0 records out 00:16:11.340 100663296 bytes (101 MB, 96 MiB) copied, 0.431238 s, 233 MB/s 00:16:11.340 20:29:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:11.340 20:29:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:11.340 20:29:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:11.340 20:29:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:11.340 20:29:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:11.340 20:29:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:11.340 20:29:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:11.340 20:29:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:11.340 
[2024-11-26 20:29:04.849788] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:11.340 20:29:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:11.340 20:29:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:11.340 20:29:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:11.340 20:29:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:11.340 20:29:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:11.340 20:29:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:11.340 20:29:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:11.340 20:29:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:11.340 20:29:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.340 20:29:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.340 [2024-11-26 20:29:04.864529] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:11.340 20:29:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.340 20:29:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:11.340 20:29:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:11.340 20:29:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:11.340 20:29:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:11.340 20:29:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:11.340 20:29:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:16:11.340 20:29:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.340 20:29:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.340 20:29:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.340 20:29:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.340 20:29:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.340 20:29:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.340 20:29:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.340 20:29:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.600 20:29:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.600 20:29:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.600 "name": "raid_bdev1", 00:16:11.600 "uuid": "d4b8ba69-0e21-4ab0-b79d-900d3408c5fd", 00:16:11.600 "strip_size_kb": 64, 00:16:11.600 "state": "online", 00:16:11.600 "raid_level": "raid5f", 00:16:11.600 "superblock": false, 00:16:11.600 "num_base_bdevs": 4, 00:16:11.600 "num_base_bdevs_discovered": 3, 00:16:11.600 "num_base_bdevs_operational": 3, 00:16:11.600 "base_bdevs_list": [ 00:16:11.600 { 00:16:11.600 "name": null, 00:16:11.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.600 "is_configured": false, 00:16:11.600 "data_offset": 0, 00:16:11.600 "data_size": 65536 00:16:11.600 }, 00:16:11.600 { 00:16:11.600 "name": "BaseBdev2", 00:16:11.600 "uuid": "f9867e7b-2fbe-5c2e-867b-2cf3460cef8c", 00:16:11.600 "is_configured": true, 00:16:11.600 "data_offset": 0, 00:16:11.600 "data_size": 65536 00:16:11.600 }, 00:16:11.600 { 00:16:11.600 "name": "BaseBdev3", 00:16:11.600 "uuid": 
"9158b3b1-4b5a-50e8-91e9-1dfe55f59670", 00:16:11.600 "is_configured": true, 00:16:11.600 "data_offset": 0, 00:16:11.600 "data_size": 65536 00:16:11.600 }, 00:16:11.600 { 00:16:11.600 "name": "BaseBdev4", 00:16:11.600 "uuid": "89a2b593-6c83-5478-8ed1-34082d2d720f", 00:16:11.600 "is_configured": true, 00:16:11.600 "data_offset": 0, 00:16:11.600 "data_size": 65536 00:16:11.600 } 00:16:11.600 ] 00:16:11.600 }' 00:16:11.600 20:29:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.600 20:29:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.859 20:29:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:11.859 20:29:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.859 20:29:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.859 [2024-11-26 20:29:05.339837] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:11.859 [2024-11-26 20:29:05.343500] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b5b0 00:16:11.859 [2024-11-26 20:29:05.345851] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:11.859 20:29:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.859 20:29:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:13.238 20:29:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:13.238 20:29:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.238 20:29:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:13.238 20:29:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:13.238 20:29:06 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.238 20:29:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.238 20:29:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.238 20:29:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.238 20:29:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.238 20:29:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.238 20:29:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.238 "name": "raid_bdev1", 00:16:13.238 "uuid": "d4b8ba69-0e21-4ab0-b79d-900d3408c5fd", 00:16:13.238 "strip_size_kb": 64, 00:16:13.238 "state": "online", 00:16:13.238 "raid_level": "raid5f", 00:16:13.238 "superblock": false, 00:16:13.238 "num_base_bdevs": 4, 00:16:13.238 "num_base_bdevs_discovered": 4, 00:16:13.238 "num_base_bdevs_operational": 4, 00:16:13.238 "process": { 00:16:13.238 "type": "rebuild", 00:16:13.238 "target": "spare", 00:16:13.238 "progress": { 00:16:13.238 "blocks": 19200, 00:16:13.238 "percent": 9 00:16:13.238 } 00:16:13.238 }, 00:16:13.238 "base_bdevs_list": [ 00:16:13.238 { 00:16:13.238 "name": "spare", 00:16:13.238 "uuid": "aae557ae-2292-5583-a24c-ed39e9d33f65", 00:16:13.238 "is_configured": true, 00:16:13.238 "data_offset": 0, 00:16:13.238 "data_size": 65536 00:16:13.238 }, 00:16:13.238 { 00:16:13.238 "name": "BaseBdev2", 00:16:13.238 "uuid": "f9867e7b-2fbe-5c2e-867b-2cf3460cef8c", 00:16:13.238 "is_configured": true, 00:16:13.238 "data_offset": 0, 00:16:13.238 "data_size": 65536 00:16:13.238 }, 00:16:13.238 { 00:16:13.238 "name": "BaseBdev3", 00:16:13.238 "uuid": "9158b3b1-4b5a-50e8-91e9-1dfe55f59670", 00:16:13.238 "is_configured": true, 00:16:13.238 "data_offset": 0, 00:16:13.238 "data_size": 65536 00:16:13.238 }, 
00:16:13.238 { 00:16:13.238 "name": "BaseBdev4", 00:16:13.238 "uuid": "89a2b593-6c83-5478-8ed1-34082d2d720f", 00:16:13.238 "is_configured": true, 00:16:13.238 "data_offset": 0, 00:16:13.238 "data_size": 65536 00:16:13.238 } 00:16:13.238 ] 00:16:13.238 }' 00:16:13.238 20:29:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.238 20:29:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:13.238 20:29:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.238 20:29:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:13.238 20:29:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:13.238 20:29:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.238 20:29:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.238 [2024-11-26 20:29:06.489411] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:13.238 [2024-11-26 20:29:06.556593] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:13.238 [2024-11-26 20:29:06.556662] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:13.238 [2024-11-26 20:29:06.556685] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:13.238 [2024-11-26 20:29:06.556696] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:13.238 20:29:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.238 20:29:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:13.238 20:29:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:16:13.238 20:29:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:13.238 20:29:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:13.238 20:29:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:13.238 20:29:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:13.238 20:29:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.238 20:29:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.238 20:29:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.238 20:29:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.238 20:29:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.238 20:29:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.238 20:29:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.238 20:29:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.238 20:29:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.238 20:29:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.238 "name": "raid_bdev1", 00:16:13.238 "uuid": "d4b8ba69-0e21-4ab0-b79d-900d3408c5fd", 00:16:13.238 "strip_size_kb": 64, 00:16:13.238 "state": "online", 00:16:13.238 "raid_level": "raid5f", 00:16:13.238 "superblock": false, 00:16:13.238 "num_base_bdevs": 4, 00:16:13.238 "num_base_bdevs_discovered": 3, 00:16:13.238 "num_base_bdevs_operational": 3, 00:16:13.238 "base_bdevs_list": [ 00:16:13.238 { 00:16:13.238 "name": null, 00:16:13.238 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:13.238 "is_configured": false, 00:16:13.238 "data_offset": 0, 00:16:13.238 "data_size": 65536 00:16:13.238 }, 00:16:13.238 { 00:16:13.238 "name": "BaseBdev2", 00:16:13.238 "uuid": "f9867e7b-2fbe-5c2e-867b-2cf3460cef8c", 00:16:13.238 "is_configured": true, 00:16:13.238 "data_offset": 0, 00:16:13.238 "data_size": 65536 00:16:13.238 }, 00:16:13.238 { 00:16:13.238 "name": "BaseBdev3", 00:16:13.238 "uuid": "9158b3b1-4b5a-50e8-91e9-1dfe55f59670", 00:16:13.238 "is_configured": true, 00:16:13.238 "data_offset": 0, 00:16:13.238 "data_size": 65536 00:16:13.238 }, 00:16:13.238 { 00:16:13.239 "name": "BaseBdev4", 00:16:13.239 "uuid": "89a2b593-6c83-5478-8ed1-34082d2d720f", 00:16:13.239 "is_configured": true, 00:16:13.239 "data_offset": 0, 00:16:13.239 "data_size": 65536 00:16:13.239 } 00:16:13.239 ] 00:16:13.239 }' 00:16:13.239 20:29:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.239 20:29:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.497 20:29:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:13.497 20:29:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.497 20:29:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:13.497 20:29:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:13.497 20:29:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.497 20:29:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.497 20:29:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.497 20:29:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.497 20:29:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.497 20:29:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.497 20:29:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.497 "name": "raid_bdev1", 00:16:13.497 "uuid": "d4b8ba69-0e21-4ab0-b79d-900d3408c5fd", 00:16:13.497 "strip_size_kb": 64, 00:16:13.497 "state": "online", 00:16:13.497 "raid_level": "raid5f", 00:16:13.497 "superblock": false, 00:16:13.497 "num_base_bdevs": 4, 00:16:13.497 "num_base_bdevs_discovered": 3, 00:16:13.497 "num_base_bdevs_operational": 3, 00:16:13.497 "base_bdevs_list": [ 00:16:13.497 { 00:16:13.497 "name": null, 00:16:13.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.497 "is_configured": false, 00:16:13.497 "data_offset": 0, 00:16:13.497 "data_size": 65536 00:16:13.497 }, 00:16:13.497 { 00:16:13.497 "name": "BaseBdev2", 00:16:13.497 "uuid": "f9867e7b-2fbe-5c2e-867b-2cf3460cef8c", 00:16:13.497 "is_configured": true, 00:16:13.497 "data_offset": 0, 00:16:13.497 "data_size": 65536 00:16:13.497 }, 00:16:13.497 { 00:16:13.497 "name": "BaseBdev3", 00:16:13.497 "uuid": "9158b3b1-4b5a-50e8-91e9-1dfe55f59670", 00:16:13.497 "is_configured": true, 00:16:13.497 "data_offset": 0, 00:16:13.497 "data_size": 65536 00:16:13.497 }, 00:16:13.497 { 00:16:13.497 "name": "BaseBdev4", 00:16:13.497 "uuid": "89a2b593-6c83-5478-8ed1-34082d2d720f", 00:16:13.497 "is_configured": true, 00:16:13.497 "data_offset": 0, 00:16:13.497 "data_size": 65536 00:16:13.497 } 00:16:13.497 ] 00:16:13.497 }' 00:16:13.497 20:29:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.757 20:29:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:13.757 20:29:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.757 20:29:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == 
\n\o\n\e ]] 00:16:13.757 20:29:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:13.757 20:29:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.757 20:29:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.757 [2024-11-26 20:29:07.142534] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:13.757 [2024-11-26 20:29:07.146099] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:16:13.757 [2024-11-26 20:29:07.148543] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:13.757 20:29:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.757 20:29:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:14.692 20:29:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:14.692 20:29:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.692 20:29:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:14.692 20:29:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:14.692 20:29:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.692 20:29:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.692 20:29:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.692 20:29:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.693 20:29:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.693 20:29:08 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.693 20:29:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.693 "name": "raid_bdev1", 00:16:14.693 "uuid": "d4b8ba69-0e21-4ab0-b79d-900d3408c5fd", 00:16:14.693 "strip_size_kb": 64, 00:16:14.693 "state": "online", 00:16:14.693 "raid_level": "raid5f", 00:16:14.693 "superblock": false, 00:16:14.693 "num_base_bdevs": 4, 00:16:14.693 "num_base_bdevs_discovered": 4, 00:16:14.693 "num_base_bdevs_operational": 4, 00:16:14.693 "process": { 00:16:14.693 "type": "rebuild", 00:16:14.693 "target": "spare", 00:16:14.693 "progress": { 00:16:14.693 "blocks": 19200, 00:16:14.693 "percent": 9 00:16:14.693 } 00:16:14.693 }, 00:16:14.693 "base_bdevs_list": [ 00:16:14.693 { 00:16:14.693 "name": "spare", 00:16:14.693 "uuid": "aae557ae-2292-5583-a24c-ed39e9d33f65", 00:16:14.693 "is_configured": true, 00:16:14.693 "data_offset": 0, 00:16:14.693 "data_size": 65536 00:16:14.693 }, 00:16:14.693 { 00:16:14.693 "name": "BaseBdev2", 00:16:14.693 "uuid": "f9867e7b-2fbe-5c2e-867b-2cf3460cef8c", 00:16:14.693 "is_configured": true, 00:16:14.693 "data_offset": 0, 00:16:14.693 "data_size": 65536 00:16:14.693 }, 00:16:14.693 { 00:16:14.693 "name": "BaseBdev3", 00:16:14.693 "uuid": "9158b3b1-4b5a-50e8-91e9-1dfe55f59670", 00:16:14.693 "is_configured": true, 00:16:14.693 "data_offset": 0, 00:16:14.693 "data_size": 65536 00:16:14.693 }, 00:16:14.693 { 00:16:14.693 "name": "BaseBdev4", 00:16:14.693 "uuid": "89a2b593-6c83-5478-8ed1-34082d2d720f", 00:16:14.693 "is_configured": true, 00:16:14.693 "data_offset": 0, 00:16:14.693 "data_size": 65536 00:16:14.693 } 00:16:14.693 ] 00:16:14.693 }' 00:16:14.693 20:29:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.951 20:29:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:14.951 20:29:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:16:14.951 20:29:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:14.951 20:29:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:14.951 20:29:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:14.951 20:29:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:14.951 20:29:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=537 00:16:14.951 20:29:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:14.951 20:29:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:14.951 20:29:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.951 20:29:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:14.951 20:29:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:14.951 20:29:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.951 20:29:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.951 20:29:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.951 20:29:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.951 20:29:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.951 20:29:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.951 20:29:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.951 "name": "raid_bdev1", 00:16:14.951 "uuid": "d4b8ba69-0e21-4ab0-b79d-900d3408c5fd", 00:16:14.951 "strip_size_kb": 64, 
00:16:14.951 "state": "online", 00:16:14.951 "raid_level": "raid5f", 00:16:14.951 "superblock": false, 00:16:14.951 "num_base_bdevs": 4, 00:16:14.951 "num_base_bdevs_discovered": 4, 00:16:14.951 "num_base_bdevs_operational": 4, 00:16:14.951 "process": { 00:16:14.951 "type": "rebuild", 00:16:14.951 "target": "spare", 00:16:14.951 "progress": { 00:16:14.951 "blocks": 21120, 00:16:14.951 "percent": 10 00:16:14.951 } 00:16:14.951 }, 00:16:14.951 "base_bdevs_list": [ 00:16:14.951 { 00:16:14.952 "name": "spare", 00:16:14.952 "uuid": "aae557ae-2292-5583-a24c-ed39e9d33f65", 00:16:14.952 "is_configured": true, 00:16:14.952 "data_offset": 0, 00:16:14.952 "data_size": 65536 00:16:14.952 }, 00:16:14.952 { 00:16:14.952 "name": "BaseBdev2", 00:16:14.952 "uuid": "f9867e7b-2fbe-5c2e-867b-2cf3460cef8c", 00:16:14.952 "is_configured": true, 00:16:14.952 "data_offset": 0, 00:16:14.952 "data_size": 65536 00:16:14.952 }, 00:16:14.952 { 00:16:14.952 "name": "BaseBdev3", 00:16:14.952 "uuid": "9158b3b1-4b5a-50e8-91e9-1dfe55f59670", 00:16:14.952 "is_configured": true, 00:16:14.952 "data_offset": 0, 00:16:14.952 "data_size": 65536 00:16:14.952 }, 00:16:14.952 { 00:16:14.952 "name": "BaseBdev4", 00:16:14.952 "uuid": "89a2b593-6c83-5478-8ed1-34082d2d720f", 00:16:14.952 "is_configured": true, 00:16:14.952 "data_offset": 0, 00:16:14.952 "data_size": 65536 00:16:14.952 } 00:16:14.952 ] 00:16:14.952 }' 00:16:14.952 20:29:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.952 20:29:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:14.952 20:29:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.952 20:29:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:14.952 20:29:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:15.886 20:29:09 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:15.886 20:29:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:15.886 20:29:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:15.886 20:29:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:15.886 20:29:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:15.886 20:29:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:15.886 20:29:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.886 20:29:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.886 20:29:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.886 20:29:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.145 20:29:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.145 20:29:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:16.145 "name": "raid_bdev1", 00:16:16.145 "uuid": "d4b8ba69-0e21-4ab0-b79d-900d3408c5fd", 00:16:16.145 "strip_size_kb": 64, 00:16:16.145 "state": "online", 00:16:16.145 "raid_level": "raid5f", 00:16:16.145 "superblock": false, 00:16:16.145 "num_base_bdevs": 4, 00:16:16.145 "num_base_bdevs_discovered": 4, 00:16:16.145 "num_base_bdevs_operational": 4, 00:16:16.145 "process": { 00:16:16.145 "type": "rebuild", 00:16:16.145 "target": "spare", 00:16:16.145 "progress": { 00:16:16.145 "blocks": 42240, 00:16:16.145 "percent": 21 00:16:16.145 } 00:16:16.145 }, 00:16:16.145 "base_bdevs_list": [ 00:16:16.145 { 00:16:16.145 "name": "spare", 00:16:16.145 "uuid": "aae557ae-2292-5583-a24c-ed39e9d33f65", 00:16:16.145 "is_configured": true, 
00:16:16.145 "data_offset": 0, 00:16:16.145 "data_size": 65536 00:16:16.145 }, 00:16:16.145 { 00:16:16.145 "name": "BaseBdev2", 00:16:16.145 "uuid": "f9867e7b-2fbe-5c2e-867b-2cf3460cef8c", 00:16:16.145 "is_configured": true, 00:16:16.145 "data_offset": 0, 00:16:16.145 "data_size": 65536 00:16:16.145 }, 00:16:16.145 { 00:16:16.145 "name": "BaseBdev3", 00:16:16.145 "uuid": "9158b3b1-4b5a-50e8-91e9-1dfe55f59670", 00:16:16.145 "is_configured": true, 00:16:16.145 "data_offset": 0, 00:16:16.145 "data_size": 65536 00:16:16.145 }, 00:16:16.145 { 00:16:16.145 "name": "BaseBdev4", 00:16:16.145 "uuid": "89a2b593-6c83-5478-8ed1-34082d2d720f", 00:16:16.145 "is_configured": true, 00:16:16.145 "data_offset": 0, 00:16:16.145 "data_size": 65536 00:16:16.145 } 00:16:16.145 ] 00:16:16.145 }' 00:16:16.145 20:29:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:16.145 20:29:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:16.145 20:29:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:16.145 20:29:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:16.145 20:29:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:17.081 20:29:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:17.081 20:29:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:17.081 20:29:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:17.081 20:29:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:17.081 20:29:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:17.081 20:29:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:16:17.081 20:29:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.081 20:29:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.081 20:29:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.081 20:29:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.081 20:29:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.081 20:29:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:17.081 "name": "raid_bdev1", 00:16:17.081 "uuid": "d4b8ba69-0e21-4ab0-b79d-900d3408c5fd", 00:16:17.081 "strip_size_kb": 64, 00:16:17.081 "state": "online", 00:16:17.081 "raid_level": "raid5f", 00:16:17.081 "superblock": false, 00:16:17.081 "num_base_bdevs": 4, 00:16:17.081 "num_base_bdevs_discovered": 4, 00:16:17.081 "num_base_bdevs_operational": 4, 00:16:17.081 "process": { 00:16:17.081 "type": "rebuild", 00:16:17.081 "target": "spare", 00:16:17.081 "progress": { 00:16:17.081 "blocks": 63360, 00:16:17.081 "percent": 32 00:16:17.081 } 00:16:17.081 }, 00:16:17.081 "base_bdevs_list": [ 00:16:17.081 { 00:16:17.081 "name": "spare", 00:16:17.081 "uuid": "aae557ae-2292-5583-a24c-ed39e9d33f65", 00:16:17.081 "is_configured": true, 00:16:17.081 "data_offset": 0, 00:16:17.081 "data_size": 65536 00:16:17.081 }, 00:16:17.081 { 00:16:17.081 "name": "BaseBdev2", 00:16:17.081 "uuid": "f9867e7b-2fbe-5c2e-867b-2cf3460cef8c", 00:16:17.081 "is_configured": true, 00:16:17.081 "data_offset": 0, 00:16:17.081 "data_size": 65536 00:16:17.081 }, 00:16:17.081 { 00:16:17.081 "name": "BaseBdev3", 00:16:17.081 "uuid": "9158b3b1-4b5a-50e8-91e9-1dfe55f59670", 00:16:17.081 "is_configured": true, 00:16:17.081 "data_offset": 0, 00:16:17.081 "data_size": 65536 00:16:17.081 }, 00:16:17.081 { 00:16:17.081 "name": "BaseBdev4", 00:16:17.081 "uuid": 
"89a2b593-6c83-5478-8ed1-34082d2d720f", 00:16:17.081 "is_configured": true, 00:16:17.081 "data_offset": 0, 00:16:17.081 "data_size": 65536 00:16:17.081 } 00:16:17.081 ] 00:16:17.081 }' 00:16:17.081 20:29:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:17.364 20:29:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:17.364 20:29:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:17.364 20:29:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:17.364 20:29:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:18.299 20:29:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:18.299 20:29:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:18.299 20:29:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:18.299 20:29:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:18.299 20:29:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:18.299 20:29:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:18.299 20:29:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.299 20:29:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.299 20:29:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.299 20:29:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.299 20:29:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.299 20:29:11 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:18.299 "name": "raid_bdev1", 00:16:18.299 "uuid": "d4b8ba69-0e21-4ab0-b79d-900d3408c5fd", 00:16:18.299 "strip_size_kb": 64, 00:16:18.299 "state": "online", 00:16:18.299 "raid_level": "raid5f", 00:16:18.299 "superblock": false, 00:16:18.299 "num_base_bdevs": 4, 00:16:18.299 "num_base_bdevs_discovered": 4, 00:16:18.299 "num_base_bdevs_operational": 4, 00:16:18.299 "process": { 00:16:18.299 "type": "rebuild", 00:16:18.299 "target": "spare", 00:16:18.299 "progress": { 00:16:18.299 "blocks": 86400, 00:16:18.299 "percent": 43 00:16:18.299 } 00:16:18.299 }, 00:16:18.299 "base_bdevs_list": [ 00:16:18.299 { 00:16:18.299 "name": "spare", 00:16:18.299 "uuid": "aae557ae-2292-5583-a24c-ed39e9d33f65", 00:16:18.299 "is_configured": true, 00:16:18.299 "data_offset": 0, 00:16:18.299 "data_size": 65536 00:16:18.299 }, 00:16:18.299 { 00:16:18.299 "name": "BaseBdev2", 00:16:18.299 "uuid": "f9867e7b-2fbe-5c2e-867b-2cf3460cef8c", 00:16:18.299 "is_configured": true, 00:16:18.299 "data_offset": 0, 00:16:18.299 "data_size": 65536 00:16:18.299 }, 00:16:18.299 { 00:16:18.299 "name": "BaseBdev3", 00:16:18.299 "uuid": "9158b3b1-4b5a-50e8-91e9-1dfe55f59670", 00:16:18.299 "is_configured": true, 00:16:18.299 "data_offset": 0, 00:16:18.299 "data_size": 65536 00:16:18.299 }, 00:16:18.299 { 00:16:18.299 "name": "BaseBdev4", 00:16:18.299 "uuid": "89a2b593-6c83-5478-8ed1-34082d2d720f", 00:16:18.299 "is_configured": true, 00:16:18.299 "data_offset": 0, 00:16:18.299 "data_size": 65536 00:16:18.299 } 00:16:18.299 ] 00:16:18.299 }' 00:16:18.299 20:29:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:18.299 20:29:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:18.299 20:29:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:18.299 20:29:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ 
spare == \s\p\a\r\e ]] 00:16:18.299 20:29:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:19.679 20:29:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:19.679 20:29:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:19.679 20:29:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:19.679 20:29:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:19.679 20:29:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:19.679 20:29:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:19.679 20:29:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.679 20:29:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.679 20:29:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.679 20:29:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.679 20:29:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.679 20:29:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:19.679 "name": "raid_bdev1", 00:16:19.679 "uuid": "d4b8ba69-0e21-4ab0-b79d-900d3408c5fd", 00:16:19.679 "strip_size_kb": 64, 00:16:19.679 "state": "online", 00:16:19.679 "raid_level": "raid5f", 00:16:19.680 "superblock": false, 00:16:19.680 "num_base_bdevs": 4, 00:16:19.680 "num_base_bdevs_discovered": 4, 00:16:19.680 "num_base_bdevs_operational": 4, 00:16:19.680 "process": { 00:16:19.680 "type": "rebuild", 00:16:19.680 "target": "spare", 00:16:19.680 "progress": { 00:16:19.680 "blocks": 107520, 00:16:19.680 "percent": 54 00:16:19.680 } 00:16:19.680 }, 00:16:19.680 
"base_bdevs_list": [ 00:16:19.680 { 00:16:19.680 "name": "spare", 00:16:19.680 "uuid": "aae557ae-2292-5583-a24c-ed39e9d33f65", 00:16:19.680 "is_configured": true, 00:16:19.680 "data_offset": 0, 00:16:19.680 "data_size": 65536 00:16:19.680 }, 00:16:19.680 { 00:16:19.680 "name": "BaseBdev2", 00:16:19.680 "uuid": "f9867e7b-2fbe-5c2e-867b-2cf3460cef8c", 00:16:19.680 "is_configured": true, 00:16:19.680 "data_offset": 0, 00:16:19.680 "data_size": 65536 00:16:19.680 }, 00:16:19.680 { 00:16:19.680 "name": "BaseBdev3", 00:16:19.680 "uuid": "9158b3b1-4b5a-50e8-91e9-1dfe55f59670", 00:16:19.680 "is_configured": true, 00:16:19.680 "data_offset": 0, 00:16:19.680 "data_size": 65536 00:16:19.680 }, 00:16:19.680 { 00:16:19.680 "name": "BaseBdev4", 00:16:19.680 "uuid": "89a2b593-6c83-5478-8ed1-34082d2d720f", 00:16:19.680 "is_configured": true, 00:16:19.680 "data_offset": 0, 00:16:19.680 "data_size": 65536 00:16:19.680 } 00:16:19.680 ] 00:16:19.680 }' 00:16:19.680 20:29:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:19.680 20:29:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:19.680 20:29:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:19.680 20:29:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:19.680 20:29:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:20.616 20:29:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:20.616 20:29:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:20.616 20:29:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:20.616 20:29:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:20.616 20:29:13 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:20.616 20:29:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:20.616 20:29:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.616 20:29:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.616 20:29:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.616 20:29:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.616 20:29:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.616 20:29:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:20.616 "name": "raid_bdev1", 00:16:20.616 "uuid": "d4b8ba69-0e21-4ab0-b79d-900d3408c5fd", 00:16:20.616 "strip_size_kb": 64, 00:16:20.616 "state": "online", 00:16:20.616 "raid_level": "raid5f", 00:16:20.616 "superblock": false, 00:16:20.616 "num_base_bdevs": 4, 00:16:20.616 "num_base_bdevs_discovered": 4, 00:16:20.617 "num_base_bdevs_operational": 4, 00:16:20.617 "process": { 00:16:20.617 "type": "rebuild", 00:16:20.617 "target": "spare", 00:16:20.617 "progress": { 00:16:20.617 "blocks": 128640, 00:16:20.617 "percent": 65 00:16:20.617 } 00:16:20.617 }, 00:16:20.617 "base_bdevs_list": [ 00:16:20.617 { 00:16:20.617 "name": "spare", 00:16:20.617 "uuid": "aae557ae-2292-5583-a24c-ed39e9d33f65", 00:16:20.617 "is_configured": true, 00:16:20.617 "data_offset": 0, 00:16:20.617 "data_size": 65536 00:16:20.617 }, 00:16:20.617 { 00:16:20.617 "name": "BaseBdev2", 00:16:20.617 "uuid": "f9867e7b-2fbe-5c2e-867b-2cf3460cef8c", 00:16:20.617 "is_configured": true, 00:16:20.617 "data_offset": 0, 00:16:20.617 "data_size": 65536 00:16:20.617 }, 00:16:20.617 { 00:16:20.617 "name": "BaseBdev3", 00:16:20.617 "uuid": "9158b3b1-4b5a-50e8-91e9-1dfe55f59670", 00:16:20.617 
"is_configured": true, 00:16:20.617 "data_offset": 0, 00:16:20.617 "data_size": 65536 00:16:20.617 }, 00:16:20.617 { 00:16:20.617 "name": "BaseBdev4", 00:16:20.617 "uuid": "89a2b593-6c83-5478-8ed1-34082d2d720f", 00:16:20.617 "is_configured": true, 00:16:20.617 "data_offset": 0, 00:16:20.617 "data_size": 65536 00:16:20.617 } 00:16:20.617 ] 00:16:20.617 }' 00:16:20.617 20:29:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:20.617 20:29:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:20.617 20:29:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:20.617 20:29:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:20.617 20:29:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:21.989 20:29:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:21.989 20:29:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:21.989 20:29:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:21.989 20:29:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:21.989 20:29:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:21.989 20:29:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:21.989 20:29:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.989 20:29:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.989 20:29:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.989 20:29:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:16:21.989 20:29:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.989 20:29:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:21.989 "name": "raid_bdev1", 00:16:21.989 "uuid": "d4b8ba69-0e21-4ab0-b79d-900d3408c5fd", 00:16:21.989 "strip_size_kb": 64, 00:16:21.989 "state": "online", 00:16:21.989 "raid_level": "raid5f", 00:16:21.989 "superblock": false, 00:16:21.989 "num_base_bdevs": 4, 00:16:21.989 "num_base_bdevs_discovered": 4, 00:16:21.989 "num_base_bdevs_operational": 4, 00:16:21.989 "process": { 00:16:21.989 "type": "rebuild", 00:16:21.989 "target": "spare", 00:16:21.989 "progress": { 00:16:21.989 "blocks": 149760, 00:16:21.989 "percent": 76 00:16:21.989 } 00:16:21.989 }, 00:16:21.989 "base_bdevs_list": [ 00:16:21.989 { 00:16:21.989 "name": "spare", 00:16:21.989 "uuid": "aae557ae-2292-5583-a24c-ed39e9d33f65", 00:16:21.989 "is_configured": true, 00:16:21.989 "data_offset": 0, 00:16:21.989 "data_size": 65536 00:16:21.989 }, 00:16:21.989 { 00:16:21.989 "name": "BaseBdev2", 00:16:21.989 "uuid": "f9867e7b-2fbe-5c2e-867b-2cf3460cef8c", 00:16:21.989 "is_configured": true, 00:16:21.989 "data_offset": 0, 00:16:21.989 "data_size": 65536 00:16:21.989 }, 00:16:21.989 { 00:16:21.989 "name": "BaseBdev3", 00:16:21.989 "uuid": "9158b3b1-4b5a-50e8-91e9-1dfe55f59670", 00:16:21.989 "is_configured": true, 00:16:21.989 "data_offset": 0, 00:16:21.989 "data_size": 65536 00:16:21.989 }, 00:16:21.989 { 00:16:21.989 "name": "BaseBdev4", 00:16:21.989 "uuid": "89a2b593-6c83-5478-8ed1-34082d2d720f", 00:16:21.989 "is_configured": true, 00:16:21.989 "data_offset": 0, 00:16:21.989 "data_size": 65536 00:16:21.989 } 00:16:21.989 ] 00:16:21.989 }' 00:16:21.989 20:29:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:21.989 20:29:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:21.989 20:29:15 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:21.989 20:29:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:21.989 20:29:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:22.933 20:29:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:22.933 20:29:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:22.933 20:29:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:22.933 20:29:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:22.933 20:29:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:22.933 20:29:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:22.933 20:29:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.933 20:29:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.933 20:29:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.933 20:29:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.933 20:29:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.933 20:29:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:22.933 "name": "raid_bdev1", 00:16:22.933 "uuid": "d4b8ba69-0e21-4ab0-b79d-900d3408c5fd", 00:16:22.933 "strip_size_kb": 64, 00:16:22.933 "state": "online", 00:16:22.933 "raid_level": "raid5f", 00:16:22.933 "superblock": false, 00:16:22.933 "num_base_bdevs": 4, 00:16:22.933 "num_base_bdevs_discovered": 4, 00:16:22.933 "num_base_bdevs_operational": 4, 00:16:22.933 "process": { 00:16:22.933 
"type": "rebuild", 00:16:22.933 "target": "spare", 00:16:22.933 "progress": { 00:16:22.933 "blocks": 172800, 00:16:22.933 "percent": 87 00:16:22.933 } 00:16:22.933 }, 00:16:22.933 "base_bdevs_list": [ 00:16:22.933 { 00:16:22.933 "name": "spare", 00:16:22.933 "uuid": "aae557ae-2292-5583-a24c-ed39e9d33f65", 00:16:22.933 "is_configured": true, 00:16:22.933 "data_offset": 0, 00:16:22.933 "data_size": 65536 00:16:22.933 }, 00:16:22.933 { 00:16:22.933 "name": "BaseBdev2", 00:16:22.933 "uuid": "f9867e7b-2fbe-5c2e-867b-2cf3460cef8c", 00:16:22.933 "is_configured": true, 00:16:22.933 "data_offset": 0, 00:16:22.933 "data_size": 65536 00:16:22.933 }, 00:16:22.933 { 00:16:22.933 "name": "BaseBdev3", 00:16:22.933 "uuid": "9158b3b1-4b5a-50e8-91e9-1dfe55f59670", 00:16:22.933 "is_configured": true, 00:16:22.933 "data_offset": 0, 00:16:22.933 "data_size": 65536 00:16:22.933 }, 00:16:22.933 { 00:16:22.933 "name": "BaseBdev4", 00:16:22.933 "uuid": "89a2b593-6c83-5478-8ed1-34082d2d720f", 00:16:22.933 "is_configured": true, 00:16:22.933 "data_offset": 0, 00:16:22.933 "data_size": 65536 00:16:22.933 } 00:16:22.933 ] 00:16:22.933 }' 00:16:22.933 20:29:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:22.933 20:29:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:22.933 20:29:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:22.934 20:29:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:22.934 20:29:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:23.871 20:29:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:23.871 20:29:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:23.871 20:29:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:16:23.871 20:29:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:23.871 20:29:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:23.871 20:29:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:23.871 20:29:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.871 20:29:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.871 20:29:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.871 20:29:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.871 20:29:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.129 20:29:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:24.129 "name": "raid_bdev1", 00:16:24.129 "uuid": "d4b8ba69-0e21-4ab0-b79d-900d3408c5fd", 00:16:24.129 "strip_size_kb": 64, 00:16:24.129 "state": "online", 00:16:24.129 "raid_level": "raid5f", 00:16:24.129 "superblock": false, 00:16:24.129 "num_base_bdevs": 4, 00:16:24.129 "num_base_bdevs_discovered": 4, 00:16:24.129 "num_base_bdevs_operational": 4, 00:16:24.129 "process": { 00:16:24.129 "type": "rebuild", 00:16:24.129 "target": "spare", 00:16:24.129 "progress": { 00:16:24.129 "blocks": 193920, 00:16:24.129 "percent": 98 00:16:24.129 } 00:16:24.129 }, 00:16:24.129 "base_bdevs_list": [ 00:16:24.129 { 00:16:24.129 "name": "spare", 00:16:24.129 "uuid": "aae557ae-2292-5583-a24c-ed39e9d33f65", 00:16:24.129 "is_configured": true, 00:16:24.129 "data_offset": 0, 00:16:24.129 "data_size": 65536 00:16:24.129 }, 00:16:24.129 { 00:16:24.129 "name": "BaseBdev2", 00:16:24.129 "uuid": "f9867e7b-2fbe-5c2e-867b-2cf3460cef8c", 00:16:24.129 "is_configured": true, 00:16:24.129 "data_offset": 0, 00:16:24.129 
"data_size": 65536 00:16:24.129 }, 00:16:24.129 { 00:16:24.129 "name": "BaseBdev3", 00:16:24.129 "uuid": "9158b3b1-4b5a-50e8-91e9-1dfe55f59670", 00:16:24.129 "is_configured": true, 00:16:24.129 "data_offset": 0, 00:16:24.129 "data_size": 65536 00:16:24.129 }, 00:16:24.129 { 00:16:24.129 "name": "BaseBdev4", 00:16:24.129 "uuid": "89a2b593-6c83-5478-8ed1-34082d2d720f", 00:16:24.129 "is_configured": true, 00:16:24.129 "data_offset": 0, 00:16:24.129 "data_size": 65536 00:16:24.129 } 00:16:24.129 ] 00:16:24.129 }' 00:16:24.129 20:29:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:24.129 20:29:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:24.129 20:29:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:24.129 20:29:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:24.129 20:29:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:24.129 [2024-11-26 20:29:17.543533] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:24.129 [2024-11-26 20:29:17.543673] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:24.129 [2024-11-26 20:29:17.543760] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:25.068 20:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:25.068 20:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:25.068 20:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:25.068 20:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:25.068 20:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:16:25.068 20:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:25.068 20:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.068 20:29:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.068 20:29:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.068 20:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.068 20:29:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.068 20:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:25.068 "name": "raid_bdev1", 00:16:25.068 "uuid": "d4b8ba69-0e21-4ab0-b79d-900d3408c5fd", 00:16:25.068 "strip_size_kb": 64, 00:16:25.068 "state": "online", 00:16:25.068 "raid_level": "raid5f", 00:16:25.068 "superblock": false, 00:16:25.068 "num_base_bdevs": 4, 00:16:25.068 "num_base_bdevs_discovered": 4, 00:16:25.068 "num_base_bdevs_operational": 4, 00:16:25.068 "base_bdevs_list": [ 00:16:25.068 { 00:16:25.068 "name": "spare", 00:16:25.068 "uuid": "aae557ae-2292-5583-a24c-ed39e9d33f65", 00:16:25.068 "is_configured": true, 00:16:25.068 "data_offset": 0, 00:16:25.068 "data_size": 65536 00:16:25.068 }, 00:16:25.068 { 00:16:25.068 "name": "BaseBdev2", 00:16:25.068 "uuid": "f9867e7b-2fbe-5c2e-867b-2cf3460cef8c", 00:16:25.068 "is_configured": true, 00:16:25.068 "data_offset": 0, 00:16:25.068 "data_size": 65536 00:16:25.068 }, 00:16:25.068 { 00:16:25.068 "name": "BaseBdev3", 00:16:25.068 "uuid": "9158b3b1-4b5a-50e8-91e9-1dfe55f59670", 00:16:25.068 "is_configured": true, 00:16:25.068 "data_offset": 0, 00:16:25.068 "data_size": 65536 00:16:25.068 }, 00:16:25.068 { 00:16:25.068 "name": "BaseBdev4", 00:16:25.068 "uuid": "89a2b593-6c83-5478-8ed1-34082d2d720f", 00:16:25.068 "is_configured": true, 00:16:25.068 "data_offset": 0, 
00:16:25.068 "data_size": 65536 00:16:25.068 } 00:16:25.068 ] 00:16:25.068 }' 00:16:25.068 20:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:25.068 20:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:25.068 20:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:25.328 20:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:25.328 20:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:25.328 20:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:25.328 20:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:25.328 20:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:25.328 20:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:25.328 20:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:25.328 20:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.329 20:29:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.329 20:29:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.329 20:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.329 20:29:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.329 20:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:25.329 "name": "raid_bdev1", 00:16:25.329 "uuid": "d4b8ba69-0e21-4ab0-b79d-900d3408c5fd", 00:16:25.329 "strip_size_kb": 64, 00:16:25.329 "state": "online", 00:16:25.329 "raid_level": 
"raid5f", 00:16:25.329 "superblock": false, 00:16:25.329 "num_base_bdevs": 4, 00:16:25.329 "num_base_bdevs_discovered": 4, 00:16:25.329 "num_base_bdevs_operational": 4, 00:16:25.329 "base_bdevs_list": [ 00:16:25.329 { 00:16:25.329 "name": "spare", 00:16:25.329 "uuid": "aae557ae-2292-5583-a24c-ed39e9d33f65", 00:16:25.329 "is_configured": true, 00:16:25.329 "data_offset": 0, 00:16:25.329 "data_size": 65536 00:16:25.329 }, 00:16:25.329 { 00:16:25.329 "name": "BaseBdev2", 00:16:25.329 "uuid": "f9867e7b-2fbe-5c2e-867b-2cf3460cef8c", 00:16:25.329 "is_configured": true, 00:16:25.329 "data_offset": 0, 00:16:25.329 "data_size": 65536 00:16:25.329 }, 00:16:25.329 { 00:16:25.329 "name": "BaseBdev3", 00:16:25.329 "uuid": "9158b3b1-4b5a-50e8-91e9-1dfe55f59670", 00:16:25.329 "is_configured": true, 00:16:25.329 "data_offset": 0, 00:16:25.329 "data_size": 65536 00:16:25.329 }, 00:16:25.329 { 00:16:25.329 "name": "BaseBdev4", 00:16:25.329 "uuid": "89a2b593-6c83-5478-8ed1-34082d2d720f", 00:16:25.329 "is_configured": true, 00:16:25.329 "data_offset": 0, 00:16:25.329 "data_size": 65536 00:16:25.329 } 00:16:25.329 ] 00:16:25.329 }' 00:16:25.329 20:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:25.329 20:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:25.329 20:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:25.329 20:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:25.329 20:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:25.329 20:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:25.329 20:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:25.329 20:29:18 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:25.329 20:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:25.329 20:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:25.329 20:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.329 20:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.329 20:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.329 20:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.329 20:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.329 20:29:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.329 20:29:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.329 20:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.329 20:29:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.329 20:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.329 "name": "raid_bdev1", 00:16:25.329 "uuid": "d4b8ba69-0e21-4ab0-b79d-900d3408c5fd", 00:16:25.329 "strip_size_kb": 64, 00:16:25.329 "state": "online", 00:16:25.329 "raid_level": "raid5f", 00:16:25.329 "superblock": false, 00:16:25.329 "num_base_bdevs": 4, 00:16:25.329 "num_base_bdevs_discovered": 4, 00:16:25.329 "num_base_bdevs_operational": 4, 00:16:25.329 "base_bdevs_list": [ 00:16:25.329 { 00:16:25.329 "name": "spare", 00:16:25.329 "uuid": "aae557ae-2292-5583-a24c-ed39e9d33f65", 00:16:25.329 "is_configured": true, 00:16:25.329 "data_offset": 0, 00:16:25.329 "data_size": 65536 00:16:25.329 }, 00:16:25.329 { 00:16:25.329 "name": "BaseBdev2", 
00:16:25.329 "uuid": "f9867e7b-2fbe-5c2e-867b-2cf3460cef8c", 00:16:25.329 "is_configured": true, 00:16:25.329 "data_offset": 0, 00:16:25.329 "data_size": 65536 00:16:25.329 }, 00:16:25.329 { 00:16:25.329 "name": "BaseBdev3", 00:16:25.329 "uuid": "9158b3b1-4b5a-50e8-91e9-1dfe55f59670", 00:16:25.329 "is_configured": true, 00:16:25.329 "data_offset": 0, 00:16:25.329 "data_size": 65536 00:16:25.329 }, 00:16:25.329 { 00:16:25.329 "name": "BaseBdev4", 00:16:25.329 "uuid": "89a2b593-6c83-5478-8ed1-34082d2d720f", 00:16:25.329 "is_configured": true, 00:16:25.329 "data_offset": 0, 00:16:25.329 "data_size": 65536 00:16:25.329 } 00:16:25.329 ] 00:16:25.329 }' 00:16:25.329 20:29:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.329 20:29:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.899 20:29:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:25.899 20:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.899 20:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.899 [2024-11-26 20:29:19.264002] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:25.899 [2024-11-26 20:29:19.264039] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:25.899 [2024-11-26 20:29:19.264136] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:25.899 [2024-11-26 20:29:19.264235] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:25.899 [2024-11-26 20:29:19.264258] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:16:25.899 20:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.899 20:29:19 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.899 20:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.899 20:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.899 20:29:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:25.899 20:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.899 20:29:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:25.899 20:29:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:25.899 20:29:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:25.899 20:29:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:25.899 20:29:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:25.899 20:29:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:25.899 20:29:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:25.899 20:29:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:25.899 20:29:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:25.899 20:29:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:25.899 20:29:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:25.899 20:29:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:25.899 20:29:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:26.160 /dev/nbd0 00:16:26.160 20:29:19 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:26.160 20:29:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:26.160 20:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:26.160 20:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:16:26.160 20:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:26.160 20:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:26.160 20:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:26.160 20:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:16:26.160 20:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:26.160 20:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:26.160 20:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:26.160 1+0 records in 00:16:26.160 1+0 records out 00:16:26.160 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000359994 s, 11.4 MB/s 00:16:26.160 20:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:26.160 20:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:16:26.160 20:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:26.160 20:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:26.160 20:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:16:26.160 20:29:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:16:26.160 20:29:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:26.160 20:29:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:26.420 /dev/nbd1 00:16:26.420 20:29:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:26.420 20:29:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:26.420 20:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:26.420 20:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:16:26.420 20:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:26.420 20:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:26.420 20:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:26.420 20:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:16:26.420 20:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:26.420 20:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:26.420 20:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:26.420 1+0 records in 00:16:26.420 1+0 records out 00:16:26.420 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000418711 s, 9.8 MB/s 00:16:26.420 20:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:26.420 20:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:16:26.420 20:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:26.420 20:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:26.420 20:29:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:16:26.420 20:29:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:26.420 20:29:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:26.420 20:29:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:26.420 20:29:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:26.420 20:29:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:26.420 20:29:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:26.420 20:29:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:26.420 20:29:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:26.420 20:29:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:26.420 20:29:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:26.680 20:29:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:26.680 20:29:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:26.680 20:29:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:26.680 20:29:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:26.680 20:29:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:26.680 20:29:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:16:26.680 20:29:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:26.680 20:29:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:26.680 20:29:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:26.680 20:29:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:26.938 20:29:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:26.939 20:29:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:26.939 20:29:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:26.939 20:29:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:26.939 20:29:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:26.939 20:29:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:26.939 20:29:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:26.939 20:29:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:26.939 20:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:26.939 20:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 95607 00:16:26.939 20:29:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 95607 ']' 00:16:26.939 20:29:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 95607 00:16:26.939 20:29:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:16:26.939 20:29:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:26.939 20:29:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o 
comm= 95607 00:16:26.939 killing process with pid 95607 00:16:26.939 Received shutdown signal, test time was about 60.000000 seconds 00:16:26.939 00:16:26.939 Latency(us) 00:16:26.939 [2024-11-26T20:29:20.491Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:26.939 [2024-11-26T20:29:20.491Z] =================================================================================================================== 00:16:26.939 [2024-11-26T20:29:20.491Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:26.939 20:29:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:26.939 20:29:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:26.939 20:29:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95607' 00:16:26.939 20:29:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 95607 00:16:26.939 [2024-11-26 20:29:20.428826] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:26.939 20:29:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 95607 00:16:27.200 [2024-11-26 20:29:20.513694] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:27.459 ************************************ 00:16:27.459 END TEST raid5f_rebuild_test 00:16:27.459 20:29:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:27.459 00:16:27.459 real 0m18.649s 00:16:27.460 user 0m22.613s 00:16:27.460 sys 0m2.207s 00:16:27.460 20:29:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:27.460 20:29:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.460 ************************************ 00:16:27.460 20:29:20 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:16:27.460 20:29:20 
bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:27.460 20:29:20 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:27.460 20:29:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:27.460 ************************************ 00:16:27.460 START TEST raid5f_rebuild_test_sb 00:16:27.460 ************************************ 00:16:27.460 20:29:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 true false true 00:16:27.460 20:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:27.460 20:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:27.460 20:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:27.460 20:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:27.460 20:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:27.460 20:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:27.460 20:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:27.460 20:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:27.460 20:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:27.460 20:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:27.460 20:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:27.460 20:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:27.460 20:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:27.460 20:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:27.460 20:29:20 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:27.460 20:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:27.460 20:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:27.460 20:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:27.460 20:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:27.460 20:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:27.460 20:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:27.460 20:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:27.460 20:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:27.460 20:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:27.460 20:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:27.460 20:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:27.460 20:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:27.460 20:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:27.460 20:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:27.460 20:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:27.460 20:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:27.460 20:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:27.460 20:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=96118 
00:16:27.460 20:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:27.460 20:29:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 96118 00:16:27.460 20:29:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 96118 ']' 00:16:27.460 20:29:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.460 20:29:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:27.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.460 20:29:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.460 20:29:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:27.460 20:29:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.460 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:27.460 Zero copy mechanism will not be used. 00:16:27.460 [2024-11-26 20:29:20.994103] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:16:27.460 [2024-11-26 20:29:20.994266] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96118 ] 00:16:27.719 [2024-11-26 20:29:21.156241] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.719 [2024-11-26 20:29:21.232178] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.978 [2024-11-26 20:29:21.303869] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:27.978 [2024-11-26 20:29:21.303914] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:28.548 20:29:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:28.548 20:29:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:16:28.548 20:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:28.548 20:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:28.548 20:29:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.548 20:29:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.548 BaseBdev1_malloc 00:16:28.548 20:29:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.548 20:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:28.548 20:29:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.548 20:29:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.548 [2024-11-26 20:29:21.884173] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:28.548 [2024-11-26 20:29:21.884248] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:28.548 [2024-11-26 20:29:21.884298] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:28.548 [2024-11-26 20:29:21.884319] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:28.548 [2024-11-26 20:29:21.886931] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:28.548 [2024-11-26 20:29:21.886978] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:28.548 BaseBdev1 00:16:28.548 20:29:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.548 20:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:28.548 20:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:28.548 20:29:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.548 20:29:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.548 BaseBdev2_malloc 00:16:28.548 20:29:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.548 20:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:28.548 20:29:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.548 20:29:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.548 [2024-11-26 20:29:21.931841] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:28.548 [2024-11-26 20:29:21.931945] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:16:28.548 [2024-11-26 20:29:21.931995] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:28.548 [2024-11-26 20:29:21.932020] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:28.548 [2024-11-26 20:29:21.935538] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:28.548 [2024-11-26 20:29:21.935590] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:28.548 BaseBdev2 00:16:28.548 20:29:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.548 20:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:28.548 20:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:28.548 20:29:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.548 20:29:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.548 BaseBdev3_malloc 00:16:28.548 20:29:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.548 20:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:28.548 20:29:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.548 20:29:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.548 [2024-11-26 20:29:21.966696] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:28.548 [2024-11-26 20:29:21.966760] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:28.548 [2024-11-26 20:29:21.966798] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:28.548 [2024-11-26 
20:29:21.966812] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:28.548 [2024-11-26 20:29:21.969177] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:28.548 [2024-11-26 20:29:21.969224] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:28.548 BaseBdev3 00:16:28.548 20:29:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.548 20:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:28.548 20:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:28.548 20:29:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.548 20:29:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.548 BaseBdev4_malloc 00:16:28.548 20:29:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.548 20:29:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:28.548 20:29:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.548 20:29:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.548 [2024-11-26 20:29:21.998378] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:28.548 [2024-11-26 20:29:21.998455] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:28.548 [2024-11-26 20:29:21.998492] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:28.548 [2024-11-26 20:29:21.998505] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:28.548 [2024-11-26 20:29:22.000895] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:16:28.548 [2024-11-26 20:29:22.000934] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:28.548 BaseBdev4 00:16:28.548 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.548 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:28.548 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.548 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.548 spare_malloc 00:16:28.548 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.548 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:28.548 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.548 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.548 spare_delay 00:16:28.548 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.548 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:28.548 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.548 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.549 [2024-11-26 20:29:22.045248] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:28.549 [2024-11-26 20:29:22.045322] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:28.549 [2024-11-26 20:29:22.045361] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 
00:16:28.549 [2024-11-26 20:29:22.045375] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:28.549 [2024-11-26 20:29:22.047861] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:28.549 [2024-11-26 20:29:22.047902] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:28.549 spare 00:16:28.549 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.549 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:28.549 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.549 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.549 [2024-11-26 20:29:22.057354] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:28.549 [2024-11-26 20:29:22.059349] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:28.549 [2024-11-26 20:29:22.059424] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:28.549 [2024-11-26 20:29:22.059482] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:28.549 [2024-11-26 20:29:22.059707] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:16:28.549 [2024-11-26 20:29:22.059732] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:28.549 [2024-11-26 20:29:22.060056] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:28.549 [2024-11-26 20:29:22.060601] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:16:28.549 [2024-11-26 20:29:22.060636] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000006280 00:16:28.549 [2024-11-26 20:29:22.060795] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:28.549 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.549 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:28.549 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:28.549 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:28.549 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:28.549 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:28.549 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:28.549 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.549 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.549 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.549 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.549 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.549 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.549 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.549 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.549 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.809 20:29:22 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.809 "name": "raid_bdev1", 00:16:28.809 "uuid": "da309304-c7b6-4505-aa93-7bfe38ccd18b", 00:16:28.809 "strip_size_kb": 64, 00:16:28.809 "state": "online", 00:16:28.809 "raid_level": "raid5f", 00:16:28.809 "superblock": true, 00:16:28.809 "num_base_bdevs": 4, 00:16:28.809 "num_base_bdevs_discovered": 4, 00:16:28.809 "num_base_bdevs_operational": 4, 00:16:28.809 "base_bdevs_list": [ 00:16:28.809 { 00:16:28.809 "name": "BaseBdev1", 00:16:28.809 "uuid": "e619a04c-b9c7-541d-91e7-0eb91c7b88a1", 00:16:28.809 "is_configured": true, 00:16:28.809 "data_offset": 2048, 00:16:28.809 "data_size": 63488 00:16:28.809 }, 00:16:28.809 { 00:16:28.809 "name": "BaseBdev2", 00:16:28.809 "uuid": "d5f38afd-b0cd-53bf-9da8-f025ed4744d3", 00:16:28.809 "is_configured": true, 00:16:28.809 "data_offset": 2048, 00:16:28.809 "data_size": 63488 00:16:28.809 }, 00:16:28.809 { 00:16:28.809 "name": "BaseBdev3", 00:16:28.809 "uuid": "8cbd9f2f-7044-5c9a-ba0c-52988a612c36", 00:16:28.809 "is_configured": true, 00:16:28.809 "data_offset": 2048, 00:16:28.809 "data_size": 63488 00:16:28.809 }, 00:16:28.809 { 00:16:28.809 "name": "BaseBdev4", 00:16:28.809 "uuid": "346d5618-aadb-5f7d-8d60-8e6596423cbb", 00:16:28.809 "is_configured": true, 00:16:28.809 "data_offset": 2048, 00:16:28.809 "data_size": 63488 00:16:28.809 } 00:16:28.809 ] 00:16:28.809 }' 00:16:28.809 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.809 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.070 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:29.070 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:29.070 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.070 20:29:22 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.070 [2024-11-26 20:29:22.463454] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:29.070 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.070 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:16:29.070 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.070 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.070 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.070 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:29.070 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.070 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:29.070 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:29.070 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:29.070 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:29.070 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:29.070 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:29.070 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:29.070 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:29.070 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:29.070 20:29:22 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:29.070 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:29.070 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:29.070 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:29.070 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:29.331 [2024-11-26 20:29:22.762839] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:29.331 /dev/nbd0 00:16:29.331 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:29.331 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:29.331 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:29.331 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:16:29.331 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:29.331 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:29.331 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:29.331 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:16:29.331 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:29.331 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:29.331 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:29.331 1+0 records in 00:16:29.331 
1+0 records out 00:16:29.331 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000418841 s, 9.8 MB/s 00:16:29.331 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:29.331 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:16:29.331 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:29.331 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:29.331 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:16:29.331 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:29.331 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:29.331 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:29.331 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:29.331 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:29.331 20:29:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:16:29.932 496+0 records in 00:16:29.932 496+0 records out 00:16:29.932 97517568 bytes (98 MB, 93 MiB) copied, 0.477546 s, 204 MB/s 00:16:29.932 20:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:29.932 20:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:29.932 20:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:29.932 20:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:29.932 20:29:23 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:29.932 20:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:29.932 20:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:30.191 20:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:30.191 [2024-11-26 20:29:23.558246] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:30.191 20:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:30.191 20:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:30.191 20:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:30.191 20:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:30.191 20:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:30.191 20:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:30.191 20:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:30.191 20:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:30.191 20:29:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.191 20:29:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.191 [2024-11-26 20:29:23.578292] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:30.191 20:29:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.191 20:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:30.191 20:29:23 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:30.191 20:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:30.191 20:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:30.191 20:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:30.191 20:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:30.191 20:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.191 20:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.191 20:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.191 20:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.191 20:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.191 20:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.191 20:29:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.191 20:29:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.191 20:29:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.191 20:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.191 "name": "raid_bdev1", 00:16:30.191 "uuid": "da309304-c7b6-4505-aa93-7bfe38ccd18b", 00:16:30.191 "strip_size_kb": 64, 00:16:30.191 "state": "online", 00:16:30.191 "raid_level": "raid5f", 00:16:30.191 "superblock": true, 00:16:30.191 "num_base_bdevs": 4, 00:16:30.191 "num_base_bdevs_discovered": 3, 00:16:30.191 "num_base_bdevs_operational": 3, 00:16:30.191 
"base_bdevs_list": [ 00:16:30.191 { 00:16:30.191 "name": null, 00:16:30.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.191 "is_configured": false, 00:16:30.191 "data_offset": 0, 00:16:30.191 "data_size": 63488 00:16:30.191 }, 00:16:30.191 { 00:16:30.191 "name": "BaseBdev2", 00:16:30.191 "uuid": "d5f38afd-b0cd-53bf-9da8-f025ed4744d3", 00:16:30.191 "is_configured": true, 00:16:30.191 "data_offset": 2048, 00:16:30.191 "data_size": 63488 00:16:30.191 }, 00:16:30.191 { 00:16:30.191 "name": "BaseBdev3", 00:16:30.191 "uuid": "8cbd9f2f-7044-5c9a-ba0c-52988a612c36", 00:16:30.191 "is_configured": true, 00:16:30.191 "data_offset": 2048, 00:16:30.191 "data_size": 63488 00:16:30.191 }, 00:16:30.191 { 00:16:30.191 "name": "BaseBdev4", 00:16:30.191 "uuid": "346d5618-aadb-5f7d-8d60-8e6596423cbb", 00:16:30.191 "is_configured": true, 00:16:30.191 "data_offset": 2048, 00:16:30.191 "data_size": 63488 00:16:30.191 } 00:16:30.191 ] 00:16:30.191 }' 00:16:30.191 20:29:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.191 20:29:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.758 20:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:30.758 20:29:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.758 20:29:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.758 [2024-11-26 20:29:24.073556] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:30.758 [2024-11-26 20:29:24.077295] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a8b0 00:16:30.758 [2024-11-26 20:29:24.079689] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:30.758 20:29:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.758 
20:29:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:31.694 20:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:31.694 20:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:31.694 20:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:31.694 20:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:31.694 20:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:31.694 20:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.694 20:29:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.694 20:29:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.694 20:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.694 20:29:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.694 20:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:31.694 "name": "raid_bdev1", 00:16:31.694 "uuid": "da309304-c7b6-4505-aa93-7bfe38ccd18b", 00:16:31.694 "strip_size_kb": 64, 00:16:31.694 "state": "online", 00:16:31.694 "raid_level": "raid5f", 00:16:31.694 "superblock": true, 00:16:31.694 "num_base_bdevs": 4, 00:16:31.694 "num_base_bdevs_discovered": 4, 00:16:31.694 "num_base_bdevs_operational": 4, 00:16:31.694 "process": { 00:16:31.694 "type": "rebuild", 00:16:31.694 "target": "spare", 00:16:31.694 "progress": { 00:16:31.694 "blocks": 19200, 00:16:31.694 "percent": 10 00:16:31.694 } 00:16:31.694 }, 00:16:31.694 "base_bdevs_list": [ 00:16:31.694 { 00:16:31.694 "name": "spare", 00:16:31.694 "uuid": 
"f98de0eb-0963-50bc-a96c-be510c4214c6", 00:16:31.694 "is_configured": true, 00:16:31.694 "data_offset": 2048, 00:16:31.694 "data_size": 63488 00:16:31.694 }, 00:16:31.694 { 00:16:31.694 "name": "BaseBdev2", 00:16:31.694 "uuid": "d5f38afd-b0cd-53bf-9da8-f025ed4744d3", 00:16:31.694 "is_configured": true, 00:16:31.694 "data_offset": 2048, 00:16:31.694 "data_size": 63488 00:16:31.694 }, 00:16:31.694 { 00:16:31.694 "name": "BaseBdev3", 00:16:31.694 "uuid": "8cbd9f2f-7044-5c9a-ba0c-52988a612c36", 00:16:31.694 "is_configured": true, 00:16:31.694 "data_offset": 2048, 00:16:31.694 "data_size": 63488 00:16:31.694 }, 00:16:31.694 { 00:16:31.694 "name": "BaseBdev4", 00:16:31.694 "uuid": "346d5618-aadb-5f7d-8d60-8e6596423cbb", 00:16:31.694 "is_configured": true, 00:16:31.694 "data_offset": 2048, 00:16:31.694 "data_size": 63488 00:16:31.694 } 00:16:31.694 ] 00:16:31.694 }' 00:16:31.694 20:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:31.694 20:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:31.694 20:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:31.694 20:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:31.694 20:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:31.694 20:29:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.694 20:29:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.694 [2024-11-26 20:29:25.227887] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:31.954 [2024-11-26 20:29:25.292056] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:31.954 [2024-11-26 20:29:25.292137] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:31.954 [2024-11-26 20:29:25.292166] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:31.954 [2024-11-26 20:29:25.292182] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:31.954 20:29:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.954 20:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:31.954 20:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:31.954 20:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:31.954 20:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:31.954 20:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:31.954 20:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:31.954 20:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.954 20:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.954 20:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.954 20:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.954 20:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.954 20:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.954 20:29:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.954 20:29:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:31.954 20:29:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.954 20:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.954 "name": "raid_bdev1", 00:16:31.954 "uuid": "da309304-c7b6-4505-aa93-7bfe38ccd18b", 00:16:31.954 "strip_size_kb": 64, 00:16:31.954 "state": "online", 00:16:31.954 "raid_level": "raid5f", 00:16:31.954 "superblock": true, 00:16:31.954 "num_base_bdevs": 4, 00:16:31.954 "num_base_bdevs_discovered": 3, 00:16:31.954 "num_base_bdevs_operational": 3, 00:16:31.954 "base_bdevs_list": [ 00:16:31.954 { 00:16:31.954 "name": null, 00:16:31.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.954 "is_configured": false, 00:16:31.954 "data_offset": 0, 00:16:31.954 "data_size": 63488 00:16:31.954 }, 00:16:31.954 { 00:16:31.954 "name": "BaseBdev2", 00:16:31.954 "uuid": "d5f38afd-b0cd-53bf-9da8-f025ed4744d3", 00:16:31.954 "is_configured": true, 00:16:31.954 "data_offset": 2048, 00:16:31.954 "data_size": 63488 00:16:31.954 }, 00:16:31.954 { 00:16:31.954 "name": "BaseBdev3", 00:16:31.954 "uuid": "8cbd9f2f-7044-5c9a-ba0c-52988a612c36", 00:16:31.954 "is_configured": true, 00:16:31.954 "data_offset": 2048, 00:16:31.954 "data_size": 63488 00:16:31.954 }, 00:16:31.954 { 00:16:31.954 "name": "BaseBdev4", 00:16:31.954 "uuid": "346d5618-aadb-5f7d-8d60-8e6596423cbb", 00:16:31.954 "is_configured": true, 00:16:31.954 "data_offset": 2048, 00:16:31.954 "data_size": 63488 00:16:31.954 } 00:16:31.954 ] 00:16:31.954 }' 00:16:31.954 20:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.954 20:29:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.212 20:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:32.212 20:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:32.212 
20:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:32.212 20:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:32.212 20:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:32.212 20:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.212 20:29:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.212 20:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.212 20:29:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.493 20:29:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.493 20:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:32.493 "name": "raid_bdev1", 00:16:32.493 "uuid": "da309304-c7b6-4505-aa93-7bfe38ccd18b", 00:16:32.493 "strip_size_kb": 64, 00:16:32.493 "state": "online", 00:16:32.493 "raid_level": "raid5f", 00:16:32.493 "superblock": true, 00:16:32.493 "num_base_bdevs": 4, 00:16:32.493 "num_base_bdevs_discovered": 3, 00:16:32.493 "num_base_bdevs_operational": 3, 00:16:32.493 "base_bdevs_list": [ 00:16:32.493 { 00:16:32.493 "name": null, 00:16:32.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.493 "is_configured": false, 00:16:32.493 "data_offset": 0, 00:16:32.493 "data_size": 63488 00:16:32.493 }, 00:16:32.493 { 00:16:32.493 "name": "BaseBdev2", 00:16:32.493 "uuid": "d5f38afd-b0cd-53bf-9da8-f025ed4744d3", 00:16:32.493 "is_configured": true, 00:16:32.493 "data_offset": 2048, 00:16:32.493 "data_size": 63488 00:16:32.493 }, 00:16:32.493 { 00:16:32.493 "name": "BaseBdev3", 00:16:32.493 "uuid": "8cbd9f2f-7044-5c9a-ba0c-52988a612c36", 00:16:32.493 "is_configured": true, 00:16:32.493 "data_offset": 2048, 00:16:32.493 
"data_size": 63488 00:16:32.493 }, 00:16:32.493 { 00:16:32.493 "name": "BaseBdev4", 00:16:32.493 "uuid": "346d5618-aadb-5f7d-8d60-8e6596423cbb", 00:16:32.493 "is_configured": true, 00:16:32.493 "data_offset": 2048, 00:16:32.493 "data_size": 63488 00:16:32.493 } 00:16:32.493 ] 00:16:32.493 }' 00:16:32.493 20:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:32.493 20:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:32.493 20:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:32.493 20:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:32.493 20:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:32.493 20:29:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.493 20:29:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.493 [2024-11-26 20:29:25.910792] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:32.493 [2024-11-26 20:29:25.914456] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a980 00:16:32.493 [2024-11-26 20:29:25.917164] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:32.493 20:29:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.493 20:29:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:33.431 20:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:33.431 20:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:33.431 20:29:26 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:33.431 20:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:33.431 20:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:33.431 20:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.431 20:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.431 20:29:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.431 20:29:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.431 20:29:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.431 20:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.431 "name": "raid_bdev1", 00:16:33.431 "uuid": "da309304-c7b6-4505-aa93-7bfe38ccd18b", 00:16:33.431 "strip_size_kb": 64, 00:16:33.431 "state": "online", 00:16:33.431 "raid_level": "raid5f", 00:16:33.431 "superblock": true, 00:16:33.431 "num_base_bdevs": 4, 00:16:33.431 "num_base_bdevs_discovered": 4, 00:16:33.431 "num_base_bdevs_operational": 4, 00:16:33.431 "process": { 00:16:33.431 "type": "rebuild", 00:16:33.431 "target": "spare", 00:16:33.431 "progress": { 00:16:33.431 "blocks": 19200, 00:16:33.431 "percent": 10 00:16:33.431 } 00:16:33.431 }, 00:16:33.431 "base_bdevs_list": [ 00:16:33.431 { 00:16:33.431 "name": "spare", 00:16:33.431 "uuid": "f98de0eb-0963-50bc-a96c-be510c4214c6", 00:16:33.431 "is_configured": true, 00:16:33.431 "data_offset": 2048, 00:16:33.431 "data_size": 63488 00:16:33.431 }, 00:16:33.431 { 00:16:33.431 "name": "BaseBdev2", 00:16:33.431 "uuid": "d5f38afd-b0cd-53bf-9da8-f025ed4744d3", 00:16:33.431 "is_configured": true, 00:16:33.431 "data_offset": 2048, 00:16:33.431 "data_size": 63488 00:16:33.431 }, 00:16:33.431 { 
00:16:33.431 "name": "BaseBdev3", 00:16:33.431 "uuid": "8cbd9f2f-7044-5c9a-ba0c-52988a612c36", 00:16:33.431 "is_configured": true, 00:16:33.431 "data_offset": 2048, 00:16:33.431 "data_size": 63488 00:16:33.431 }, 00:16:33.431 { 00:16:33.431 "name": "BaseBdev4", 00:16:33.431 "uuid": "346d5618-aadb-5f7d-8d60-8e6596423cbb", 00:16:33.431 "is_configured": true, 00:16:33.431 "data_offset": 2048, 00:16:33.431 "data_size": 63488 00:16:33.431 } 00:16:33.431 ] 00:16:33.431 }' 00:16:33.431 20:29:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:33.690 20:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:33.690 20:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.690 20:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:33.690 20:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:33.690 20:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:33.690 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:33.690 20:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:33.690 20:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:33.690 20:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=556 00:16:33.690 20:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:33.690 20:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:33.690 20:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:33.690 20:29:27 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:33.690 20:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:33.690 20:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:33.690 20:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.690 20:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.690 20:29:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.690 20:29:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.690 20:29:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.690 20:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.690 "name": "raid_bdev1", 00:16:33.690 "uuid": "da309304-c7b6-4505-aa93-7bfe38ccd18b", 00:16:33.690 "strip_size_kb": 64, 00:16:33.690 "state": "online", 00:16:33.690 "raid_level": "raid5f", 00:16:33.690 "superblock": true, 00:16:33.690 "num_base_bdevs": 4, 00:16:33.690 "num_base_bdevs_discovered": 4, 00:16:33.690 "num_base_bdevs_operational": 4, 00:16:33.690 "process": { 00:16:33.690 "type": "rebuild", 00:16:33.690 "target": "spare", 00:16:33.690 "progress": { 00:16:33.690 "blocks": 21120, 00:16:33.690 "percent": 11 00:16:33.690 } 00:16:33.690 }, 00:16:33.690 "base_bdevs_list": [ 00:16:33.690 { 00:16:33.690 "name": "spare", 00:16:33.690 "uuid": "f98de0eb-0963-50bc-a96c-be510c4214c6", 00:16:33.690 "is_configured": true, 00:16:33.690 "data_offset": 2048, 00:16:33.690 "data_size": 63488 00:16:33.690 }, 00:16:33.690 { 00:16:33.690 "name": "BaseBdev2", 00:16:33.690 "uuid": "d5f38afd-b0cd-53bf-9da8-f025ed4744d3", 00:16:33.690 "is_configured": true, 00:16:33.690 "data_offset": 2048, 00:16:33.690 "data_size": 63488 00:16:33.690 }, 00:16:33.690 { 
00:16:33.690 "name": "BaseBdev3", 00:16:33.690 "uuid": "8cbd9f2f-7044-5c9a-ba0c-52988a612c36", 00:16:33.690 "is_configured": true, 00:16:33.690 "data_offset": 2048, 00:16:33.690 "data_size": 63488 00:16:33.690 }, 00:16:33.690 { 00:16:33.690 "name": "BaseBdev4", 00:16:33.690 "uuid": "346d5618-aadb-5f7d-8d60-8e6596423cbb", 00:16:33.690 "is_configured": true, 00:16:33.690 "data_offset": 2048, 00:16:33.690 "data_size": 63488 00:16:33.690 } 00:16:33.690 ] 00:16:33.690 }' 00:16:33.690 20:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:33.690 20:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:33.690 20:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.690 20:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:33.690 20:29:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:35.069 20:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:35.069 20:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:35.069 20:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:35.069 20:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:35.069 20:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:35.069 20:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:35.069 20:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.069 20:29:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.069 20:29:28 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.069 20:29:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.069 20:29:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.069 20:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.069 "name": "raid_bdev1", 00:16:35.069 "uuid": "da309304-c7b6-4505-aa93-7bfe38ccd18b", 00:16:35.069 "strip_size_kb": 64, 00:16:35.069 "state": "online", 00:16:35.069 "raid_level": "raid5f", 00:16:35.069 "superblock": true, 00:16:35.069 "num_base_bdevs": 4, 00:16:35.069 "num_base_bdevs_discovered": 4, 00:16:35.069 "num_base_bdevs_operational": 4, 00:16:35.069 "process": { 00:16:35.069 "type": "rebuild", 00:16:35.069 "target": "spare", 00:16:35.069 "progress": { 00:16:35.069 "blocks": 42240, 00:16:35.069 "percent": 22 00:16:35.070 } 00:16:35.070 }, 00:16:35.070 "base_bdevs_list": [ 00:16:35.070 { 00:16:35.070 "name": "spare", 00:16:35.070 "uuid": "f98de0eb-0963-50bc-a96c-be510c4214c6", 00:16:35.070 "is_configured": true, 00:16:35.070 "data_offset": 2048, 00:16:35.070 "data_size": 63488 00:16:35.070 }, 00:16:35.070 { 00:16:35.070 "name": "BaseBdev2", 00:16:35.070 "uuid": "d5f38afd-b0cd-53bf-9da8-f025ed4744d3", 00:16:35.070 "is_configured": true, 00:16:35.070 "data_offset": 2048, 00:16:35.070 "data_size": 63488 00:16:35.070 }, 00:16:35.070 { 00:16:35.070 "name": "BaseBdev3", 00:16:35.070 "uuid": "8cbd9f2f-7044-5c9a-ba0c-52988a612c36", 00:16:35.070 "is_configured": true, 00:16:35.070 "data_offset": 2048, 00:16:35.070 "data_size": 63488 00:16:35.070 }, 00:16:35.070 { 00:16:35.070 "name": "BaseBdev4", 00:16:35.070 "uuid": "346d5618-aadb-5f7d-8d60-8e6596423cbb", 00:16:35.070 "is_configured": true, 00:16:35.070 "data_offset": 2048, 00:16:35.070 "data_size": 63488 00:16:35.070 } 00:16:35.070 ] 00:16:35.070 }' 00:16:35.070 20:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:16:35.070 20:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:35.070 20:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:35.070 20:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:35.070 20:29:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:36.007 20:29:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:36.007 20:29:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:36.007 20:29:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:36.007 20:29:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:36.007 20:29:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:36.007 20:29:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:36.007 20:29:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.007 20:29:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.007 20:29:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.007 20:29:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.007 20:29:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.007 20:29:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:36.007 "name": "raid_bdev1", 00:16:36.007 "uuid": "da309304-c7b6-4505-aa93-7bfe38ccd18b", 00:16:36.007 "strip_size_kb": 64, 00:16:36.007 "state": "online", 00:16:36.007 
"raid_level": "raid5f", 00:16:36.007 "superblock": true, 00:16:36.007 "num_base_bdevs": 4, 00:16:36.007 "num_base_bdevs_discovered": 4, 00:16:36.007 "num_base_bdevs_operational": 4, 00:16:36.007 "process": { 00:16:36.007 "type": "rebuild", 00:16:36.007 "target": "spare", 00:16:36.007 "progress": { 00:16:36.007 "blocks": 65280, 00:16:36.007 "percent": 34 00:16:36.007 } 00:16:36.007 }, 00:16:36.007 "base_bdevs_list": [ 00:16:36.007 { 00:16:36.007 "name": "spare", 00:16:36.007 "uuid": "f98de0eb-0963-50bc-a96c-be510c4214c6", 00:16:36.007 "is_configured": true, 00:16:36.007 "data_offset": 2048, 00:16:36.007 "data_size": 63488 00:16:36.007 }, 00:16:36.007 { 00:16:36.007 "name": "BaseBdev2", 00:16:36.007 "uuid": "d5f38afd-b0cd-53bf-9da8-f025ed4744d3", 00:16:36.007 "is_configured": true, 00:16:36.007 "data_offset": 2048, 00:16:36.007 "data_size": 63488 00:16:36.007 }, 00:16:36.007 { 00:16:36.007 "name": "BaseBdev3", 00:16:36.007 "uuid": "8cbd9f2f-7044-5c9a-ba0c-52988a612c36", 00:16:36.007 "is_configured": true, 00:16:36.007 "data_offset": 2048, 00:16:36.007 "data_size": 63488 00:16:36.007 }, 00:16:36.007 { 00:16:36.007 "name": "BaseBdev4", 00:16:36.007 "uuid": "346d5618-aadb-5f7d-8d60-8e6596423cbb", 00:16:36.007 "is_configured": true, 00:16:36.007 "data_offset": 2048, 00:16:36.007 "data_size": 63488 00:16:36.007 } 00:16:36.007 ] 00:16:36.007 }' 00:16:36.007 20:29:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:36.007 20:29:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:36.007 20:29:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:36.007 20:29:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:36.007 20:29:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:37.412 20:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- 
# (( SECONDS < timeout )) 00:16:37.412 20:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:37.412 20:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:37.412 20:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:37.412 20:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:37.412 20:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:37.412 20:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.412 20:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.412 20:29:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.412 20:29:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.412 20:29:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.412 20:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:37.412 "name": "raid_bdev1", 00:16:37.412 "uuid": "da309304-c7b6-4505-aa93-7bfe38ccd18b", 00:16:37.412 "strip_size_kb": 64, 00:16:37.412 "state": "online", 00:16:37.412 "raid_level": "raid5f", 00:16:37.412 "superblock": true, 00:16:37.412 "num_base_bdevs": 4, 00:16:37.412 "num_base_bdevs_discovered": 4, 00:16:37.412 "num_base_bdevs_operational": 4, 00:16:37.412 "process": { 00:16:37.412 "type": "rebuild", 00:16:37.412 "target": "spare", 00:16:37.412 "progress": { 00:16:37.412 "blocks": 86400, 00:16:37.412 "percent": 45 00:16:37.412 } 00:16:37.412 }, 00:16:37.412 "base_bdevs_list": [ 00:16:37.412 { 00:16:37.412 "name": "spare", 00:16:37.412 "uuid": "f98de0eb-0963-50bc-a96c-be510c4214c6", 00:16:37.412 "is_configured": true, 
00:16:37.412 "data_offset": 2048, 00:16:37.412 "data_size": 63488 00:16:37.412 }, 00:16:37.412 { 00:16:37.412 "name": "BaseBdev2", 00:16:37.412 "uuid": "d5f38afd-b0cd-53bf-9da8-f025ed4744d3", 00:16:37.412 "is_configured": true, 00:16:37.412 "data_offset": 2048, 00:16:37.412 "data_size": 63488 00:16:37.412 }, 00:16:37.412 { 00:16:37.412 "name": "BaseBdev3", 00:16:37.412 "uuid": "8cbd9f2f-7044-5c9a-ba0c-52988a612c36", 00:16:37.412 "is_configured": true, 00:16:37.412 "data_offset": 2048, 00:16:37.412 "data_size": 63488 00:16:37.412 }, 00:16:37.412 { 00:16:37.412 "name": "BaseBdev4", 00:16:37.412 "uuid": "346d5618-aadb-5f7d-8d60-8e6596423cbb", 00:16:37.412 "is_configured": true, 00:16:37.412 "data_offset": 2048, 00:16:37.412 "data_size": 63488 00:16:37.412 } 00:16:37.412 ] 00:16:37.412 }' 00:16:37.412 20:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:37.412 20:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:37.412 20:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:37.412 20:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:37.412 20:29:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:38.350 20:29:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:38.350 20:29:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:38.350 20:29:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:38.350 20:29:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:38.350 20:29:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:38.350 20:29:31 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:38.351 20:29:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.351 20:29:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.351 20:29:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.351 20:29:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.351 20:29:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.351 20:29:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:38.351 "name": "raid_bdev1", 00:16:38.351 "uuid": "da309304-c7b6-4505-aa93-7bfe38ccd18b", 00:16:38.351 "strip_size_kb": 64, 00:16:38.351 "state": "online", 00:16:38.351 "raid_level": "raid5f", 00:16:38.351 "superblock": true, 00:16:38.351 "num_base_bdevs": 4, 00:16:38.351 "num_base_bdevs_discovered": 4, 00:16:38.351 "num_base_bdevs_operational": 4, 00:16:38.351 "process": { 00:16:38.351 "type": "rebuild", 00:16:38.351 "target": "spare", 00:16:38.351 "progress": { 00:16:38.351 "blocks": 109440, 00:16:38.351 "percent": 57 00:16:38.351 } 00:16:38.351 }, 00:16:38.351 "base_bdevs_list": [ 00:16:38.351 { 00:16:38.351 "name": "spare", 00:16:38.351 "uuid": "f98de0eb-0963-50bc-a96c-be510c4214c6", 00:16:38.351 "is_configured": true, 00:16:38.351 "data_offset": 2048, 00:16:38.351 "data_size": 63488 00:16:38.351 }, 00:16:38.351 { 00:16:38.351 "name": "BaseBdev2", 00:16:38.351 "uuid": "d5f38afd-b0cd-53bf-9da8-f025ed4744d3", 00:16:38.351 "is_configured": true, 00:16:38.351 "data_offset": 2048, 00:16:38.351 "data_size": 63488 00:16:38.351 }, 00:16:38.351 { 00:16:38.351 "name": "BaseBdev3", 00:16:38.351 "uuid": "8cbd9f2f-7044-5c9a-ba0c-52988a612c36", 00:16:38.351 "is_configured": true, 00:16:38.351 "data_offset": 2048, 00:16:38.351 "data_size": 63488 00:16:38.351 }, 00:16:38.351 
{ 00:16:38.351 "name": "BaseBdev4", 00:16:38.351 "uuid": "346d5618-aadb-5f7d-8d60-8e6596423cbb", 00:16:38.351 "is_configured": true, 00:16:38.351 "data_offset": 2048, 00:16:38.351 "data_size": 63488 00:16:38.351 } 00:16:38.351 ] 00:16:38.351 }' 00:16:38.351 20:29:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:38.351 20:29:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:38.351 20:29:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:38.351 20:29:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:38.351 20:29:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:39.729 20:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:39.729 20:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:39.729 20:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.729 20:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:39.729 20:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:39.729 20:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:39.729 20:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.729 20:29:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.729 20:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.729 20:29:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.729 20:29:32 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.729 20:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:39.729 "name": "raid_bdev1", 00:16:39.729 "uuid": "da309304-c7b6-4505-aa93-7bfe38ccd18b", 00:16:39.729 "strip_size_kb": 64, 00:16:39.729 "state": "online", 00:16:39.729 "raid_level": "raid5f", 00:16:39.729 "superblock": true, 00:16:39.729 "num_base_bdevs": 4, 00:16:39.729 "num_base_bdevs_discovered": 4, 00:16:39.729 "num_base_bdevs_operational": 4, 00:16:39.729 "process": { 00:16:39.729 "type": "rebuild", 00:16:39.729 "target": "spare", 00:16:39.729 "progress": { 00:16:39.729 "blocks": 130560, 00:16:39.729 "percent": 68 00:16:39.729 } 00:16:39.729 }, 00:16:39.729 "base_bdevs_list": [ 00:16:39.729 { 00:16:39.729 "name": "spare", 00:16:39.729 "uuid": "f98de0eb-0963-50bc-a96c-be510c4214c6", 00:16:39.729 "is_configured": true, 00:16:39.729 "data_offset": 2048, 00:16:39.729 "data_size": 63488 00:16:39.729 }, 00:16:39.729 { 00:16:39.729 "name": "BaseBdev2", 00:16:39.729 "uuid": "d5f38afd-b0cd-53bf-9da8-f025ed4744d3", 00:16:39.729 "is_configured": true, 00:16:39.729 "data_offset": 2048, 00:16:39.729 "data_size": 63488 00:16:39.729 }, 00:16:39.729 { 00:16:39.729 "name": "BaseBdev3", 00:16:39.729 "uuid": "8cbd9f2f-7044-5c9a-ba0c-52988a612c36", 00:16:39.729 "is_configured": true, 00:16:39.729 "data_offset": 2048, 00:16:39.729 "data_size": 63488 00:16:39.729 }, 00:16:39.729 { 00:16:39.729 "name": "BaseBdev4", 00:16:39.729 "uuid": "346d5618-aadb-5f7d-8d60-8e6596423cbb", 00:16:39.729 "is_configured": true, 00:16:39.729 "data_offset": 2048, 00:16:39.729 "data_size": 63488 00:16:39.729 } 00:16:39.729 ] 00:16:39.729 }' 00:16:39.729 20:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.729 20:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:39.729 20:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:16:39.729 20:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:39.729 20:29:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:40.737 20:29:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:40.737 20:29:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:40.737 20:29:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:40.737 20:29:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:40.737 20:29:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:40.737 20:29:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:40.737 20:29:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.737 20:29:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.737 20:29:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.737 20:29:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.737 20:29:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.737 20:29:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:40.737 "name": "raid_bdev1", 00:16:40.737 "uuid": "da309304-c7b6-4505-aa93-7bfe38ccd18b", 00:16:40.737 "strip_size_kb": 64, 00:16:40.737 "state": "online", 00:16:40.737 "raid_level": "raid5f", 00:16:40.737 "superblock": true, 00:16:40.737 "num_base_bdevs": 4, 00:16:40.737 "num_base_bdevs_discovered": 4, 00:16:40.737 "num_base_bdevs_operational": 4, 00:16:40.737 "process": { 00:16:40.737 "type": 
"rebuild", 00:16:40.737 "target": "spare", 00:16:40.737 "progress": { 00:16:40.737 "blocks": 153600, 00:16:40.737 "percent": 80 00:16:40.737 } 00:16:40.737 }, 00:16:40.737 "base_bdevs_list": [ 00:16:40.737 { 00:16:40.737 "name": "spare", 00:16:40.737 "uuid": "f98de0eb-0963-50bc-a96c-be510c4214c6", 00:16:40.737 "is_configured": true, 00:16:40.737 "data_offset": 2048, 00:16:40.737 "data_size": 63488 00:16:40.737 }, 00:16:40.737 { 00:16:40.737 "name": "BaseBdev2", 00:16:40.737 "uuid": "d5f38afd-b0cd-53bf-9da8-f025ed4744d3", 00:16:40.737 "is_configured": true, 00:16:40.737 "data_offset": 2048, 00:16:40.737 "data_size": 63488 00:16:40.737 }, 00:16:40.737 { 00:16:40.737 "name": "BaseBdev3", 00:16:40.737 "uuid": "8cbd9f2f-7044-5c9a-ba0c-52988a612c36", 00:16:40.737 "is_configured": true, 00:16:40.737 "data_offset": 2048, 00:16:40.737 "data_size": 63488 00:16:40.737 }, 00:16:40.737 { 00:16:40.737 "name": "BaseBdev4", 00:16:40.737 "uuid": "346d5618-aadb-5f7d-8d60-8e6596423cbb", 00:16:40.737 "is_configured": true, 00:16:40.737 "data_offset": 2048, 00:16:40.737 "data_size": 63488 00:16:40.737 } 00:16:40.737 ] 00:16:40.737 }' 00:16:40.737 20:29:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:40.737 20:29:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:40.737 20:29:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:40.737 20:29:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:40.737 20:29:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:41.706 20:29:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:41.706 20:29:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:41.706 20:29:35 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:41.706 20:29:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:41.706 20:29:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:41.706 20:29:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:41.706 20:29:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.706 20:29:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.706 20:29:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.706 20:29:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.706 20:29:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.706 20:29:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:41.706 "name": "raid_bdev1", 00:16:41.706 "uuid": "da309304-c7b6-4505-aa93-7bfe38ccd18b", 00:16:41.706 "strip_size_kb": 64, 00:16:41.706 "state": "online", 00:16:41.706 "raid_level": "raid5f", 00:16:41.706 "superblock": true, 00:16:41.706 "num_base_bdevs": 4, 00:16:41.706 "num_base_bdevs_discovered": 4, 00:16:41.706 "num_base_bdevs_operational": 4, 00:16:41.706 "process": { 00:16:41.706 "type": "rebuild", 00:16:41.706 "target": "spare", 00:16:41.706 "progress": { 00:16:41.706 "blocks": 174720, 00:16:41.706 "percent": 91 00:16:41.706 } 00:16:41.706 }, 00:16:41.706 "base_bdevs_list": [ 00:16:41.706 { 00:16:41.706 "name": "spare", 00:16:41.706 "uuid": "f98de0eb-0963-50bc-a96c-be510c4214c6", 00:16:41.706 "is_configured": true, 00:16:41.706 "data_offset": 2048, 00:16:41.706 "data_size": 63488 00:16:41.706 }, 00:16:41.706 { 00:16:41.706 "name": "BaseBdev2", 00:16:41.706 "uuid": "d5f38afd-b0cd-53bf-9da8-f025ed4744d3", 00:16:41.706 
"is_configured": true, 00:16:41.706 "data_offset": 2048, 00:16:41.706 "data_size": 63488 00:16:41.706 }, 00:16:41.706 { 00:16:41.706 "name": "BaseBdev3", 00:16:41.706 "uuid": "8cbd9f2f-7044-5c9a-ba0c-52988a612c36", 00:16:41.706 "is_configured": true, 00:16:41.706 "data_offset": 2048, 00:16:41.706 "data_size": 63488 00:16:41.706 }, 00:16:41.706 { 00:16:41.706 "name": "BaseBdev4", 00:16:41.706 "uuid": "346d5618-aadb-5f7d-8d60-8e6596423cbb", 00:16:41.706 "is_configured": true, 00:16:41.706 "data_offset": 2048, 00:16:41.706 "data_size": 63488 00:16:41.706 } 00:16:41.706 ] 00:16:41.706 }' 00:16:41.706 20:29:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:41.965 20:29:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:41.965 20:29:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:41.965 20:29:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:41.965 20:29:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:42.530 [2024-11-26 20:29:36.018294] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:42.530 [2024-11-26 20:29:36.018532] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:42.530 [2024-11-26 20:29:36.018800] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:42.790 20:29:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:42.790 20:29:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:42.790 20:29:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:42.790 20:29:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:16:42.790 20:29:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:42.790 20:29:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:42.790 20:29:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.790 20:29:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.790 20:29:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.790 20:29:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.048 20:29:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.048 20:29:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:43.048 "name": "raid_bdev1", 00:16:43.048 "uuid": "da309304-c7b6-4505-aa93-7bfe38ccd18b", 00:16:43.048 "strip_size_kb": 64, 00:16:43.048 "state": "online", 00:16:43.048 "raid_level": "raid5f", 00:16:43.048 "superblock": true, 00:16:43.048 "num_base_bdevs": 4, 00:16:43.048 "num_base_bdevs_discovered": 4, 00:16:43.048 "num_base_bdevs_operational": 4, 00:16:43.048 "base_bdevs_list": [ 00:16:43.048 { 00:16:43.048 "name": "spare", 00:16:43.048 "uuid": "f98de0eb-0963-50bc-a96c-be510c4214c6", 00:16:43.048 "is_configured": true, 00:16:43.048 "data_offset": 2048, 00:16:43.048 "data_size": 63488 00:16:43.048 }, 00:16:43.048 { 00:16:43.048 "name": "BaseBdev2", 00:16:43.048 "uuid": "d5f38afd-b0cd-53bf-9da8-f025ed4744d3", 00:16:43.048 "is_configured": true, 00:16:43.048 "data_offset": 2048, 00:16:43.048 "data_size": 63488 00:16:43.048 }, 00:16:43.048 { 00:16:43.048 "name": "BaseBdev3", 00:16:43.048 "uuid": "8cbd9f2f-7044-5c9a-ba0c-52988a612c36", 00:16:43.048 "is_configured": true, 00:16:43.048 "data_offset": 2048, 00:16:43.048 "data_size": 63488 00:16:43.048 }, 00:16:43.048 { 00:16:43.048 "name": 
"BaseBdev4", 00:16:43.048 "uuid": "346d5618-aadb-5f7d-8d60-8e6596423cbb", 00:16:43.048 "is_configured": true, 00:16:43.048 "data_offset": 2048, 00:16:43.048 "data_size": 63488 00:16:43.048 } 00:16:43.048 ] 00:16:43.048 }' 00:16:43.048 20:29:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:43.048 20:29:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:43.048 20:29:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:43.048 20:29:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:43.048 20:29:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:43.048 20:29:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:43.048 20:29:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:43.048 20:29:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:43.048 20:29:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:43.048 20:29:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:43.048 20:29:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.048 20:29:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.048 20:29:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.048 20:29:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.048 20:29:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.048 20:29:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:16:43.048 "name": "raid_bdev1", 00:16:43.048 "uuid": "da309304-c7b6-4505-aa93-7bfe38ccd18b", 00:16:43.048 "strip_size_kb": 64, 00:16:43.048 "state": "online", 00:16:43.048 "raid_level": "raid5f", 00:16:43.048 "superblock": true, 00:16:43.048 "num_base_bdevs": 4, 00:16:43.048 "num_base_bdevs_discovered": 4, 00:16:43.048 "num_base_bdevs_operational": 4, 00:16:43.048 "base_bdevs_list": [ 00:16:43.048 { 00:16:43.048 "name": "spare", 00:16:43.048 "uuid": "f98de0eb-0963-50bc-a96c-be510c4214c6", 00:16:43.048 "is_configured": true, 00:16:43.048 "data_offset": 2048, 00:16:43.048 "data_size": 63488 00:16:43.048 }, 00:16:43.048 { 00:16:43.048 "name": "BaseBdev2", 00:16:43.048 "uuid": "d5f38afd-b0cd-53bf-9da8-f025ed4744d3", 00:16:43.048 "is_configured": true, 00:16:43.048 "data_offset": 2048, 00:16:43.048 "data_size": 63488 00:16:43.048 }, 00:16:43.048 { 00:16:43.048 "name": "BaseBdev3", 00:16:43.048 "uuid": "8cbd9f2f-7044-5c9a-ba0c-52988a612c36", 00:16:43.048 "is_configured": true, 00:16:43.048 "data_offset": 2048, 00:16:43.048 "data_size": 63488 00:16:43.048 }, 00:16:43.048 { 00:16:43.048 "name": "BaseBdev4", 00:16:43.048 "uuid": "346d5618-aadb-5f7d-8d60-8e6596423cbb", 00:16:43.048 "is_configured": true, 00:16:43.048 "data_offset": 2048, 00:16:43.048 "data_size": 63488 00:16:43.048 } 00:16:43.048 ] 00:16:43.048 }' 00:16:43.048 20:29:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:43.048 20:29:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:43.048 20:29:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:43.048 20:29:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:43.048 20:29:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:43.048 20:29:36 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:43.048 20:29:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:43.048 20:29:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:43.048 20:29:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:43.048 20:29:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:43.048 20:29:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.048 20:29:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.048 20:29:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.048 20:29:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.048 20:29:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.048 20:29:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.307 20:29:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.307 20:29:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.307 20:29:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.307 20:29:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.307 "name": "raid_bdev1", 00:16:43.307 "uuid": "da309304-c7b6-4505-aa93-7bfe38ccd18b", 00:16:43.307 "strip_size_kb": 64, 00:16:43.307 "state": "online", 00:16:43.307 "raid_level": "raid5f", 00:16:43.307 "superblock": true, 00:16:43.307 "num_base_bdevs": 4, 00:16:43.307 "num_base_bdevs_discovered": 4, 00:16:43.307 "num_base_bdevs_operational": 4, 00:16:43.307 "base_bdevs_list": [ 00:16:43.307 { 
00:16:43.307 "name": "spare", 00:16:43.307 "uuid": "f98de0eb-0963-50bc-a96c-be510c4214c6", 00:16:43.307 "is_configured": true, 00:16:43.307 "data_offset": 2048, 00:16:43.307 "data_size": 63488 00:16:43.307 }, 00:16:43.307 { 00:16:43.307 "name": "BaseBdev2", 00:16:43.307 "uuid": "d5f38afd-b0cd-53bf-9da8-f025ed4744d3", 00:16:43.307 "is_configured": true, 00:16:43.307 "data_offset": 2048, 00:16:43.307 "data_size": 63488 00:16:43.307 }, 00:16:43.307 { 00:16:43.307 "name": "BaseBdev3", 00:16:43.307 "uuid": "8cbd9f2f-7044-5c9a-ba0c-52988a612c36", 00:16:43.307 "is_configured": true, 00:16:43.307 "data_offset": 2048, 00:16:43.307 "data_size": 63488 00:16:43.307 }, 00:16:43.307 { 00:16:43.307 "name": "BaseBdev4", 00:16:43.307 "uuid": "346d5618-aadb-5f7d-8d60-8e6596423cbb", 00:16:43.307 "is_configured": true, 00:16:43.307 "data_offset": 2048, 00:16:43.307 "data_size": 63488 00:16:43.307 } 00:16:43.307 ] 00:16:43.307 }' 00:16:43.307 20:29:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.307 20:29:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.565 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:43.565 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.565 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.565 [2024-11-26 20:29:37.029875] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:43.565 [2024-11-26 20:29:37.029962] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:43.565 [2024-11-26 20:29:37.030086] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:43.565 [2024-11-26 20:29:37.030214] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:43.565 [2024-11-26 
20:29:37.030284] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:16:43.565 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.565 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.565 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.565 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:43.565 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.565 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.565 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:43.565 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:43.565 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:43.565 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:43.565 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:43.565 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:43.565 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:43.565 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:43.565 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:43.565 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:43.565 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:43.565 20:29:37 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:43.565 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:43.824 /dev/nbd0 00:16:43.824 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:43.824 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:43.824 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:43.824 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:16:43.824 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:43.824 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:43.824 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:43.824 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:16:43.824 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:43.824 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:43.824 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:43.824 1+0 records in 00:16:43.824 1+0 records out 00:16:43.824 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023828 s, 17.2 MB/s 00:16:43.824 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:43.824 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:16:43.824 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:43.824 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:43.824 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:16:43.824 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:43.824 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:43.824 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:44.082 /dev/nbd1 00:16:44.082 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:44.082 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:44.082 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:44.082 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:16:44.082 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:44.082 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:44.082 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:44.082 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:16:44.082 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:44.082 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:44.082 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:44.082 1+0 records in 00:16:44.082 
1+0 records out 00:16:44.082 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284848 s, 14.4 MB/s 00:16:44.082 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:44.082 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:16:44.082 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:44.082 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:44.082 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:16:44.082 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:44.082 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:44.082 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:44.340 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:44.340 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:44.340 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:44.340 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:44.340 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:44.340 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:44.340 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:44.599 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:44.599 
20:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:44.599 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:44.599 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:44.599 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:44.599 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:44.599 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:44.599 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:44.599 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:44.599 20:29:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:44.858 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:44.858 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:44.858 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:44.858 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:44.858 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:44.858 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:44.858 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:44.858 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:44.858 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:44.858 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd 
bdev_passthru_delete spare 00:16:44.858 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.858 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.858 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.858 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:44.858 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.858 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.858 [2024-11-26 20:29:38.206190] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:44.858 [2024-11-26 20:29:38.206277] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:44.858 [2024-11-26 20:29:38.206300] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:44.858 [2024-11-26 20:29:38.206313] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:44.858 [2024-11-26 20:29:38.208829] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:44.858 [2024-11-26 20:29:38.208876] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:44.858 [2024-11-26 20:29:38.208977] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:44.858 [2024-11-26 20:29:38.209049] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:44.858 [2024-11-26 20:29:38.209194] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:44.858 [2024-11-26 20:29:38.209293] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:44.858 [2024-11-26 20:29:38.209362] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:44.858 spare 00:16:44.858 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.858 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:44.858 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.858 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.858 [2024-11-26 20:29:38.309287] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:16:44.858 [2024-11-26 20:29:38.309336] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:44.858 [2024-11-26 20:29:38.309823] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049030 00:16:44.858 [2024-11-26 20:29:38.310471] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:16:44.858 [2024-11-26 20:29:38.310529] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:16:44.858 [2024-11-26 20:29:38.310815] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:44.858 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.858 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:44.858 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:44.858 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:44.858 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:44.858 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:16:44.858 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:44.858 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.858 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.858 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.858 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.858 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.858 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.858 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.858 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.858 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.858 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.858 "name": "raid_bdev1", 00:16:44.858 "uuid": "da309304-c7b6-4505-aa93-7bfe38ccd18b", 00:16:44.858 "strip_size_kb": 64, 00:16:44.858 "state": "online", 00:16:44.858 "raid_level": "raid5f", 00:16:44.858 "superblock": true, 00:16:44.858 "num_base_bdevs": 4, 00:16:44.858 "num_base_bdevs_discovered": 4, 00:16:44.858 "num_base_bdevs_operational": 4, 00:16:44.858 "base_bdevs_list": [ 00:16:44.858 { 00:16:44.858 "name": "spare", 00:16:44.858 "uuid": "f98de0eb-0963-50bc-a96c-be510c4214c6", 00:16:44.858 "is_configured": true, 00:16:44.858 "data_offset": 2048, 00:16:44.858 "data_size": 63488 00:16:44.858 }, 00:16:44.858 { 00:16:44.858 "name": "BaseBdev2", 00:16:44.858 "uuid": "d5f38afd-b0cd-53bf-9da8-f025ed4744d3", 00:16:44.858 "is_configured": true, 00:16:44.858 "data_offset": 
2048, 00:16:44.858 "data_size": 63488 00:16:44.858 }, 00:16:44.858 { 00:16:44.858 "name": "BaseBdev3", 00:16:44.858 "uuid": "8cbd9f2f-7044-5c9a-ba0c-52988a612c36", 00:16:44.858 "is_configured": true, 00:16:44.858 "data_offset": 2048, 00:16:44.858 "data_size": 63488 00:16:44.858 }, 00:16:44.858 { 00:16:44.858 "name": "BaseBdev4", 00:16:44.858 "uuid": "346d5618-aadb-5f7d-8d60-8e6596423cbb", 00:16:44.858 "is_configured": true, 00:16:44.858 "data_offset": 2048, 00:16:44.858 "data_size": 63488 00:16:44.858 } 00:16:44.858 ] 00:16:44.858 }' 00:16:44.858 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.858 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.426 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:45.426 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:45.426 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:45.426 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:45.426 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:45.426 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.426 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.426 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.426 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.426 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.426 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:45.427 "name": 
"raid_bdev1", 00:16:45.427 "uuid": "da309304-c7b6-4505-aa93-7bfe38ccd18b", 00:16:45.427 "strip_size_kb": 64, 00:16:45.427 "state": "online", 00:16:45.427 "raid_level": "raid5f", 00:16:45.427 "superblock": true, 00:16:45.427 "num_base_bdevs": 4, 00:16:45.427 "num_base_bdevs_discovered": 4, 00:16:45.427 "num_base_bdevs_operational": 4, 00:16:45.427 "base_bdevs_list": [ 00:16:45.427 { 00:16:45.427 "name": "spare", 00:16:45.427 "uuid": "f98de0eb-0963-50bc-a96c-be510c4214c6", 00:16:45.427 "is_configured": true, 00:16:45.427 "data_offset": 2048, 00:16:45.427 "data_size": 63488 00:16:45.427 }, 00:16:45.427 { 00:16:45.427 "name": "BaseBdev2", 00:16:45.427 "uuid": "d5f38afd-b0cd-53bf-9da8-f025ed4744d3", 00:16:45.427 "is_configured": true, 00:16:45.427 "data_offset": 2048, 00:16:45.427 "data_size": 63488 00:16:45.427 }, 00:16:45.427 { 00:16:45.427 "name": "BaseBdev3", 00:16:45.427 "uuid": "8cbd9f2f-7044-5c9a-ba0c-52988a612c36", 00:16:45.427 "is_configured": true, 00:16:45.427 "data_offset": 2048, 00:16:45.427 "data_size": 63488 00:16:45.427 }, 00:16:45.427 { 00:16:45.427 "name": "BaseBdev4", 00:16:45.427 "uuid": "346d5618-aadb-5f7d-8d60-8e6596423cbb", 00:16:45.427 "is_configured": true, 00:16:45.427 "data_offset": 2048, 00:16:45.427 "data_size": 63488 00:16:45.427 } 00:16:45.427 ] 00:16:45.427 }' 00:16:45.427 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:45.427 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:45.427 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:45.427 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:45.427 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.427 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 
00:16:45.427 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.427 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.427 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.427 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:45.427 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:45.427 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.427 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.427 [2024-11-26 20:29:38.961694] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:45.427 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.427 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:45.427 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:45.427 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:45.427 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:45.427 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:45.427 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:45.427 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.427 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.427 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:45.427 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.427 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.427 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.427 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.427 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.686 20:29:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.686 20:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.686 "name": "raid_bdev1", 00:16:45.686 "uuid": "da309304-c7b6-4505-aa93-7bfe38ccd18b", 00:16:45.686 "strip_size_kb": 64, 00:16:45.686 "state": "online", 00:16:45.686 "raid_level": "raid5f", 00:16:45.686 "superblock": true, 00:16:45.686 "num_base_bdevs": 4, 00:16:45.686 "num_base_bdevs_discovered": 3, 00:16:45.686 "num_base_bdevs_operational": 3, 00:16:45.686 "base_bdevs_list": [ 00:16:45.686 { 00:16:45.686 "name": null, 00:16:45.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.686 "is_configured": false, 00:16:45.686 "data_offset": 0, 00:16:45.686 "data_size": 63488 00:16:45.686 }, 00:16:45.686 { 00:16:45.686 "name": "BaseBdev2", 00:16:45.686 "uuid": "d5f38afd-b0cd-53bf-9da8-f025ed4744d3", 00:16:45.686 "is_configured": true, 00:16:45.686 "data_offset": 2048, 00:16:45.686 "data_size": 63488 00:16:45.686 }, 00:16:45.686 { 00:16:45.686 "name": "BaseBdev3", 00:16:45.686 "uuid": "8cbd9f2f-7044-5c9a-ba0c-52988a612c36", 00:16:45.686 "is_configured": true, 00:16:45.686 "data_offset": 2048, 00:16:45.686 "data_size": 63488 00:16:45.686 }, 00:16:45.686 { 00:16:45.686 "name": "BaseBdev4", 00:16:45.686 "uuid": "346d5618-aadb-5f7d-8d60-8e6596423cbb", 00:16:45.686 "is_configured": true, 00:16:45.686 "data_offset": 
2048, 00:16:45.686 "data_size": 63488 00:16:45.686 } 00:16:45.686 ] 00:16:45.686 }' 00:16:45.686 20:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.686 20:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.945 20:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:45.945 20:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.945 20:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.945 [2024-11-26 20:29:39.456882] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:45.945 [2024-11-26 20:29:39.457197] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:45.945 [2024-11-26 20:29:39.457219] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:45.945 [2024-11-26 20:29:39.457286] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:45.945 [2024-11-26 20:29:39.460650] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049100 00:16:45.945 [2024-11-26 20:29:39.462989] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:45.945 20:29:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.945 20:29:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:47.323 20:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:47.323 20:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:47.323 20:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:47.323 20:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:47.323 20:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:47.323 20:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.323 20:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.323 20:29:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.323 20:29:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.323 20:29:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.323 20:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:47.324 "name": "raid_bdev1", 00:16:47.324 "uuid": "da309304-c7b6-4505-aa93-7bfe38ccd18b", 00:16:47.324 "strip_size_kb": 64, 00:16:47.324 "state": "online", 00:16:47.324 
"raid_level": "raid5f", 00:16:47.324 "superblock": true, 00:16:47.324 "num_base_bdevs": 4, 00:16:47.324 "num_base_bdevs_discovered": 4, 00:16:47.324 "num_base_bdevs_operational": 4, 00:16:47.324 "process": { 00:16:47.324 "type": "rebuild", 00:16:47.324 "target": "spare", 00:16:47.324 "progress": { 00:16:47.324 "blocks": 19200, 00:16:47.324 "percent": 10 00:16:47.324 } 00:16:47.324 }, 00:16:47.324 "base_bdevs_list": [ 00:16:47.324 { 00:16:47.324 "name": "spare", 00:16:47.324 "uuid": "f98de0eb-0963-50bc-a96c-be510c4214c6", 00:16:47.324 "is_configured": true, 00:16:47.324 "data_offset": 2048, 00:16:47.324 "data_size": 63488 00:16:47.324 }, 00:16:47.324 { 00:16:47.324 "name": "BaseBdev2", 00:16:47.324 "uuid": "d5f38afd-b0cd-53bf-9da8-f025ed4744d3", 00:16:47.324 "is_configured": true, 00:16:47.324 "data_offset": 2048, 00:16:47.324 "data_size": 63488 00:16:47.324 }, 00:16:47.324 { 00:16:47.324 "name": "BaseBdev3", 00:16:47.324 "uuid": "8cbd9f2f-7044-5c9a-ba0c-52988a612c36", 00:16:47.324 "is_configured": true, 00:16:47.324 "data_offset": 2048, 00:16:47.324 "data_size": 63488 00:16:47.324 }, 00:16:47.324 { 00:16:47.324 "name": "BaseBdev4", 00:16:47.324 "uuid": "346d5618-aadb-5f7d-8d60-8e6596423cbb", 00:16:47.324 "is_configured": true, 00:16:47.324 "data_offset": 2048, 00:16:47.324 "data_size": 63488 00:16:47.324 } 00:16:47.324 ] 00:16:47.324 }' 00:16:47.324 20:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:47.324 20:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:47.324 20:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:47.324 20:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:47.324 20:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:47.324 20:29:40 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.324 20:29:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.324 [2024-11-26 20:29:40.630719] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:47.324 [2024-11-26 20:29:40.674778] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:47.324 [2024-11-26 20:29:40.674968] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:47.324 [2024-11-26 20:29:40.675016] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:47.324 [2024-11-26 20:29:40.675042] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:47.324 20:29:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.324 20:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:47.324 20:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:47.324 20:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:47.324 20:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:47.324 20:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:47.324 20:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:47.324 20:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.324 20:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.324 20:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.324 20:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:16:47.324 20:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.324 20:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.324 20:29:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.324 20:29:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.324 20:29:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.324 20:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.324 "name": "raid_bdev1", 00:16:47.324 "uuid": "da309304-c7b6-4505-aa93-7bfe38ccd18b", 00:16:47.324 "strip_size_kb": 64, 00:16:47.324 "state": "online", 00:16:47.324 "raid_level": "raid5f", 00:16:47.324 "superblock": true, 00:16:47.324 "num_base_bdevs": 4, 00:16:47.324 "num_base_bdevs_discovered": 3, 00:16:47.324 "num_base_bdevs_operational": 3, 00:16:47.324 "base_bdevs_list": [ 00:16:47.324 { 00:16:47.324 "name": null, 00:16:47.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.324 "is_configured": false, 00:16:47.324 "data_offset": 0, 00:16:47.324 "data_size": 63488 00:16:47.324 }, 00:16:47.324 { 00:16:47.324 "name": "BaseBdev2", 00:16:47.324 "uuid": "d5f38afd-b0cd-53bf-9da8-f025ed4744d3", 00:16:47.324 "is_configured": true, 00:16:47.324 "data_offset": 2048, 00:16:47.324 "data_size": 63488 00:16:47.324 }, 00:16:47.324 { 00:16:47.324 "name": "BaseBdev3", 00:16:47.324 "uuid": "8cbd9f2f-7044-5c9a-ba0c-52988a612c36", 00:16:47.324 "is_configured": true, 00:16:47.324 "data_offset": 2048, 00:16:47.324 "data_size": 63488 00:16:47.324 }, 00:16:47.324 { 00:16:47.324 "name": "BaseBdev4", 00:16:47.324 "uuid": "346d5618-aadb-5f7d-8d60-8e6596423cbb", 00:16:47.324 "is_configured": true, 00:16:47.324 "data_offset": 2048, 00:16:47.324 "data_size": 63488 00:16:47.324 } 00:16:47.324 ] 00:16:47.324 
}' 00:16:47.324 20:29:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.324 20:29:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.892 20:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:47.892 20:29:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.892 20:29:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.892 [2024-11-26 20:29:41.169115] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:47.892 [2024-11-26 20:29:41.169245] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:47.892 [2024-11-26 20:29:41.169295] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:16:47.892 [2024-11-26 20:29:41.169346] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:47.892 [2024-11-26 20:29:41.169906] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:47.892 [2024-11-26 20:29:41.169930] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:47.892 [2024-11-26 20:29:41.170032] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:47.892 [2024-11-26 20:29:41.170047] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:47.892 [2024-11-26 20:29:41.170063] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:47.892 [2024-11-26 20:29:41.170095] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:47.892 spare 00:16:47.892 [2024-11-26 20:29:41.173653] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:16:47.892 20:29:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.892 20:29:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:47.892 [2024-11-26 20:29:41.176272] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:48.829 20:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:48.829 20:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:48.829 20:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:48.829 20:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:48.829 20:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:48.829 20:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.829 20:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.829 20:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.829 20:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.829 20:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.830 20:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:48.830 "name": "raid_bdev1", 00:16:48.830 "uuid": "da309304-c7b6-4505-aa93-7bfe38ccd18b", 00:16:48.830 "strip_size_kb": 64, 00:16:48.830 "state": 
"online", 00:16:48.830 "raid_level": "raid5f", 00:16:48.830 "superblock": true, 00:16:48.830 "num_base_bdevs": 4, 00:16:48.830 "num_base_bdevs_discovered": 4, 00:16:48.830 "num_base_bdevs_operational": 4, 00:16:48.830 "process": { 00:16:48.830 "type": "rebuild", 00:16:48.830 "target": "spare", 00:16:48.830 "progress": { 00:16:48.830 "blocks": 19200, 00:16:48.830 "percent": 10 00:16:48.830 } 00:16:48.830 }, 00:16:48.830 "base_bdevs_list": [ 00:16:48.830 { 00:16:48.830 "name": "spare", 00:16:48.830 "uuid": "f98de0eb-0963-50bc-a96c-be510c4214c6", 00:16:48.830 "is_configured": true, 00:16:48.830 "data_offset": 2048, 00:16:48.830 "data_size": 63488 00:16:48.830 }, 00:16:48.830 { 00:16:48.830 "name": "BaseBdev2", 00:16:48.830 "uuid": "d5f38afd-b0cd-53bf-9da8-f025ed4744d3", 00:16:48.830 "is_configured": true, 00:16:48.830 "data_offset": 2048, 00:16:48.830 "data_size": 63488 00:16:48.830 }, 00:16:48.830 { 00:16:48.830 "name": "BaseBdev3", 00:16:48.830 "uuid": "8cbd9f2f-7044-5c9a-ba0c-52988a612c36", 00:16:48.830 "is_configured": true, 00:16:48.830 "data_offset": 2048, 00:16:48.830 "data_size": 63488 00:16:48.830 }, 00:16:48.830 { 00:16:48.830 "name": "BaseBdev4", 00:16:48.830 "uuid": "346d5618-aadb-5f7d-8d60-8e6596423cbb", 00:16:48.830 "is_configured": true, 00:16:48.830 "data_offset": 2048, 00:16:48.830 "data_size": 63488 00:16:48.830 } 00:16:48.830 ] 00:16:48.830 }' 00:16:48.830 20:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:48.830 20:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:48.830 20:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:48.830 20:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:48.830 20:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:48.830 20:29:42 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.830 20:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.830 [2024-11-26 20:29:42.342024] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:49.093 [2024-11-26 20:29:42.388251] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:49.093 [2024-11-26 20:29:42.388454] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:49.093 [2024-11-26 20:29:42.388510] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:49.093 [2024-11-26 20:29:42.388542] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:49.093 20:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.093 20:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:49.093 20:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:49.093 20:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:49.093 20:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:49.093 20:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:49.093 20:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:49.093 20:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.093 20:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.093 20:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.093 20:29:42 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.093 20:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.093 20:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.093 20:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.093 20:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.093 20:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.093 20:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.093 "name": "raid_bdev1", 00:16:49.093 "uuid": "da309304-c7b6-4505-aa93-7bfe38ccd18b", 00:16:49.093 "strip_size_kb": 64, 00:16:49.093 "state": "online", 00:16:49.093 "raid_level": "raid5f", 00:16:49.093 "superblock": true, 00:16:49.093 "num_base_bdevs": 4, 00:16:49.093 "num_base_bdevs_discovered": 3, 00:16:49.093 "num_base_bdevs_operational": 3, 00:16:49.093 "base_bdevs_list": [ 00:16:49.093 { 00:16:49.093 "name": null, 00:16:49.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.093 "is_configured": false, 00:16:49.093 "data_offset": 0, 00:16:49.093 "data_size": 63488 00:16:49.093 }, 00:16:49.093 { 00:16:49.093 "name": "BaseBdev2", 00:16:49.093 "uuid": "d5f38afd-b0cd-53bf-9da8-f025ed4744d3", 00:16:49.093 "is_configured": true, 00:16:49.093 "data_offset": 2048, 00:16:49.093 "data_size": 63488 00:16:49.093 }, 00:16:49.093 { 00:16:49.093 "name": "BaseBdev3", 00:16:49.093 "uuid": "8cbd9f2f-7044-5c9a-ba0c-52988a612c36", 00:16:49.093 "is_configured": true, 00:16:49.093 "data_offset": 2048, 00:16:49.093 "data_size": 63488 00:16:49.093 }, 00:16:49.093 { 00:16:49.093 "name": "BaseBdev4", 00:16:49.093 "uuid": "346d5618-aadb-5f7d-8d60-8e6596423cbb", 00:16:49.093 "is_configured": true, 00:16:49.093 "data_offset": 2048, 00:16:49.093 
"data_size": 63488 00:16:49.093 } 00:16:49.093 ] 00:16:49.093 }' 00:16:49.093 20:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.093 20:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.357 20:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:49.357 20:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:49.357 20:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:49.357 20:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:49.357 20:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:49.357 20:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.357 20:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.357 20:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.357 20:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.357 20:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.357 20:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:49.357 "name": "raid_bdev1", 00:16:49.357 "uuid": "da309304-c7b6-4505-aa93-7bfe38ccd18b", 00:16:49.357 "strip_size_kb": 64, 00:16:49.357 "state": "online", 00:16:49.357 "raid_level": "raid5f", 00:16:49.357 "superblock": true, 00:16:49.357 "num_base_bdevs": 4, 00:16:49.357 "num_base_bdevs_discovered": 3, 00:16:49.357 "num_base_bdevs_operational": 3, 00:16:49.357 "base_bdevs_list": [ 00:16:49.357 { 00:16:49.357 "name": null, 00:16:49.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.357 
"is_configured": false, 00:16:49.357 "data_offset": 0, 00:16:49.357 "data_size": 63488 00:16:49.357 }, 00:16:49.357 { 00:16:49.357 "name": "BaseBdev2", 00:16:49.357 "uuid": "d5f38afd-b0cd-53bf-9da8-f025ed4744d3", 00:16:49.357 "is_configured": true, 00:16:49.357 "data_offset": 2048, 00:16:49.357 "data_size": 63488 00:16:49.357 }, 00:16:49.357 { 00:16:49.357 "name": "BaseBdev3", 00:16:49.357 "uuid": "8cbd9f2f-7044-5c9a-ba0c-52988a612c36", 00:16:49.357 "is_configured": true, 00:16:49.357 "data_offset": 2048, 00:16:49.357 "data_size": 63488 00:16:49.357 }, 00:16:49.357 { 00:16:49.357 "name": "BaseBdev4", 00:16:49.357 "uuid": "346d5618-aadb-5f7d-8d60-8e6596423cbb", 00:16:49.357 "is_configured": true, 00:16:49.357 "data_offset": 2048, 00:16:49.357 "data_size": 63488 00:16:49.357 } 00:16:49.357 ] 00:16:49.357 }' 00:16:49.357 20:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:49.616 20:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:49.616 20:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:49.616 20:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:49.616 20:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:49.616 20:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.616 20:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.616 20:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.616 20:29:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:49.616 20:29:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.616 20:29:42 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.616 [2024-11-26 20:29:43.002536] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:49.616 [2024-11-26 20:29:43.002641] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.616 [2024-11-26 20:29:43.002671] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:16:49.616 [2024-11-26 20:29:43.002686] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.616 [2024-11-26 20:29:43.003269] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.616 [2024-11-26 20:29:43.003304] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:49.616 [2024-11-26 20:29:43.003410] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:49.616 [2024-11-26 20:29:43.003434] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:49.616 [2024-11-26 20:29:43.003455] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:49.616 [2024-11-26 20:29:43.003473] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:49.616 BaseBdev1 00:16:49.616 20:29:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.616 20:29:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:50.555 20:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:50.555 20:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:50.555 20:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:50.555 20:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:50.555 20:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:50.555 20:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:50.555 20:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.555 20:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.555 20:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.555 20:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.555 20:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.555 20:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.555 20:29:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.555 20:29:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.555 20:29:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.555 20:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.555 "name": "raid_bdev1", 00:16:50.555 "uuid": "da309304-c7b6-4505-aa93-7bfe38ccd18b", 00:16:50.555 "strip_size_kb": 64, 00:16:50.555 "state": "online", 00:16:50.555 "raid_level": "raid5f", 00:16:50.555 "superblock": true, 00:16:50.555 "num_base_bdevs": 4, 00:16:50.555 "num_base_bdevs_discovered": 3, 00:16:50.555 "num_base_bdevs_operational": 3, 00:16:50.555 "base_bdevs_list": [ 00:16:50.555 { 00:16:50.555 "name": null, 00:16:50.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.555 "is_configured": false, 00:16:50.555 
"data_offset": 0, 00:16:50.555 "data_size": 63488 00:16:50.555 }, 00:16:50.555 { 00:16:50.555 "name": "BaseBdev2", 00:16:50.555 "uuid": "d5f38afd-b0cd-53bf-9da8-f025ed4744d3", 00:16:50.555 "is_configured": true, 00:16:50.555 "data_offset": 2048, 00:16:50.555 "data_size": 63488 00:16:50.555 }, 00:16:50.555 { 00:16:50.555 "name": "BaseBdev3", 00:16:50.555 "uuid": "8cbd9f2f-7044-5c9a-ba0c-52988a612c36", 00:16:50.555 "is_configured": true, 00:16:50.555 "data_offset": 2048, 00:16:50.555 "data_size": 63488 00:16:50.555 }, 00:16:50.555 { 00:16:50.555 "name": "BaseBdev4", 00:16:50.555 "uuid": "346d5618-aadb-5f7d-8d60-8e6596423cbb", 00:16:50.555 "is_configured": true, 00:16:50.555 "data_offset": 2048, 00:16:50.556 "data_size": 63488 00:16:50.556 } 00:16:50.556 ] 00:16:50.556 }' 00:16:50.556 20:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.556 20:29:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.123 20:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:51.123 20:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:51.123 20:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:51.123 20:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:51.123 20:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:51.123 20:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.123 20:29:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.123 20:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.123 20:29:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:51.123 20:29:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.123 20:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:51.123 "name": "raid_bdev1", 00:16:51.123 "uuid": "da309304-c7b6-4505-aa93-7bfe38ccd18b", 00:16:51.123 "strip_size_kb": 64, 00:16:51.123 "state": "online", 00:16:51.123 "raid_level": "raid5f", 00:16:51.123 "superblock": true, 00:16:51.123 "num_base_bdevs": 4, 00:16:51.123 "num_base_bdevs_discovered": 3, 00:16:51.123 "num_base_bdevs_operational": 3, 00:16:51.123 "base_bdevs_list": [ 00:16:51.123 { 00:16:51.123 "name": null, 00:16:51.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.123 "is_configured": false, 00:16:51.123 "data_offset": 0, 00:16:51.123 "data_size": 63488 00:16:51.123 }, 00:16:51.123 { 00:16:51.123 "name": "BaseBdev2", 00:16:51.123 "uuid": "d5f38afd-b0cd-53bf-9da8-f025ed4744d3", 00:16:51.123 "is_configured": true, 00:16:51.123 "data_offset": 2048, 00:16:51.123 "data_size": 63488 00:16:51.123 }, 00:16:51.123 { 00:16:51.123 "name": "BaseBdev3", 00:16:51.123 "uuid": "8cbd9f2f-7044-5c9a-ba0c-52988a612c36", 00:16:51.123 "is_configured": true, 00:16:51.123 "data_offset": 2048, 00:16:51.123 "data_size": 63488 00:16:51.123 }, 00:16:51.123 { 00:16:51.123 "name": "BaseBdev4", 00:16:51.123 "uuid": "346d5618-aadb-5f7d-8d60-8e6596423cbb", 00:16:51.123 "is_configured": true, 00:16:51.123 "data_offset": 2048, 00:16:51.123 "data_size": 63488 00:16:51.123 } 00:16:51.123 ] 00:16:51.123 }' 00:16:51.123 20:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:51.123 20:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:51.123 20:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:51.123 20:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:51.123 
20:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:51.123 20:29:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:16:51.123 20:29:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:51.123 20:29:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:51.123 20:29:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:51.123 20:29:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:51.123 20:29:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:51.123 20:29:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:51.123 20:29:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.123 20:29:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.123 [2024-11-26 20:29:44.615842] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:51.123 [2024-11-26 20:29:44.616073] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:51.123 [2024-11-26 20:29:44.616096] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:51.124 request: 00:16:51.124 { 00:16:51.124 "base_bdev": "BaseBdev1", 00:16:51.124 "raid_bdev": "raid_bdev1", 00:16:51.124 "method": "bdev_raid_add_base_bdev", 00:16:51.124 "req_id": 1 00:16:51.124 } 00:16:51.124 Got JSON-RPC error response 00:16:51.124 response: 00:16:51.124 { 00:16:51.124 "code": -22, 00:16:51.124 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:16:51.124 } 00:16:51.124 20:29:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:51.124 20:29:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:16:51.124 20:29:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:51.124 20:29:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:51.124 20:29:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:51.124 20:29:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:52.502 20:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:52.502 20:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:52.502 20:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:52.502 20:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:52.502 20:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:52.502 20:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:52.502 20:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.502 20:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.502 20:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.502 20:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.502 20:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.502 20:29:45 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.502 20:29:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.502 20:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.502 20:29:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.502 20:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.502 "name": "raid_bdev1", 00:16:52.502 "uuid": "da309304-c7b6-4505-aa93-7bfe38ccd18b", 00:16:52.502 "strip_size_kb": 64, 00:16:52.502 "state": "online", 00:16:52.502 "raid_level": "raid5f", 00:16:52.502 "superblock": true, 00:16:52.502 "num_base_bdevs": 4, 00:16:52.502 "num_base_bdevs_discovered": 3, 00:16:52.502 "num_base_bdevs_operational": 3, 00:16:52.502 "base_bdevs_list": [ 00:16:52.502 { 00:16:52.502 "name": null, 00:16:52.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.502 "is_configured": false, 00:16:52.502 "data_offset": 0, 00:16:52.502 "data_size": 63488 00:16:52.502 }, 00:16:52.502 { 00:16:52.502 "name": "BaseBdev2", 00:16:52.502 "uuid": "d5f38afd-b0cd-53bf-9da8-f025ed4744d3", 00:16:52.502 "is_configured": true, 00:16:52.502 "data_offset": 2048, 00:16:52.502 "data_size": 63488 00:16:52.502 }, 00:16:52.502 { 00:16:52.502 "name": "BaseBdev3", 00:16:52.502 "uuid": "8cbd9f2f-7044-5c9a-ba0c-52988a612c36", 00:16:52.502 "is_configured": true, 00:16:52.502 "data_offset": 2048, 00:16:52.502 "data_size": 63488 00:16:52.502 }, 00:16:52.502 { 00:16:52.502 "name": "BaseBdev4", 00:16:52.502 "uuid": "346d5618-aadb-5f7d-8d60-8e6596423cbb", 00:16:52.502 "is_configured": true, 00:16:52.502 "data_offset": 2048, 00:16:52.502 "data_size": 63488 00:16:52.502 } 00:16:52.502 ] 00:16:52.502 }' 00:16:52.502 20:29:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.502 20:29:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:16:52.763 20:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:52.763 20:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:52.763 20:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:52.763 20:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:52.763 20:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:52.763 20:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.763 20:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.763 20:29:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.763 20:29:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.763 20:29:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.763 20:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:52.763 "name": "raid_bdev1", 00:16:52.763 "uuid": "da309304-c7b6-4505-aa93-7bfe38ccd18b", 00:16:52.763 "strip_size_kb": 64, 00:16:52.763 "state": "online", 00:16:52.763 "raid_level": "raid5f", 00:16:52.763 "superblock": true, 00:16:52.763 "num_base_bdevs": 4, 00:16:52.763 "num_base_bdevs_discovered": 3, 00:16:52.763 "num_base_bdevs_operational": 3, 00:16:52.763 "base_bdevs_list": [ 00:16:52.763 { 00:16:52.763 "name": null, 00:16:52.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.763 "is_configured": false, 00:16:52.763 "data_offset": 0, 00:16:52.763 "data_size": 63488 00:16:52.763 }, 00:16:52.763 { 00:16:52.763 "name": "BaseBdev2", 00:16:52.763 "uuid": "d5f38afd-b0cd-53bf-9da8-f025ed4744d3", 00:16:52.764 "is_configured": true, 
00:16:52.764 "data_offset": 2048, 00:16:52.764 "data_size": 63488 00:16:52.764 }, 00:16:52.764 { 00:16:52.764 "name": "BaseBdev3", 00:16:52.764 "uuid": "8cbd9f2f-7044-5c9a-ba0c-52988a612c36", 00:16:52.764 "is_configured": true, 00:16:52.764 "data_offset": 2048, 00:16:52.764 "data_size": 63488 00:16:52.764 }, 00:16:52.764 { 00:16:52.764 "name": "BaseBdev4", 00:16:52.764 "uuid": "346d5618-aadb-5f7d-8d60-8e6596423cbb", 00:16:52.764 "is_configured": true, 00:16:52.764 "data_offset": 2048, 00:16:52.764 "data_size": 63488 00:16:52.764 } 00:16:52.764 ] 00:16:52.764 }' 00:16:52.764 20:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:52.764 20:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:52.764 20:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:52.764 20:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:52.764 20:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 96118 00:16:52.764 20:29:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 96118 ']' 00:16:52.764 20:29:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 96118 00:16:52.764 20:29:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:16:52.764 20:29:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:52.764 20:29:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96118 00:16:52.764 20:29:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:52.764 20:29:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:52.764 20:29:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 
-- # echo 'killing process with pid 96118' 00:16:52.764 killing process with pid 96118 00:16:52.764 Received shutdown signal, test time was about 60.000000 seconds 00:16:52.764 00:16:52.764 Latency(us) 00:16:52.764 [2024-11-26T20:29:46.316Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:52.764 [2024-11-26T20:29:46.316Z] =================================================================================================================== 00:16:52.764 [2024-11-26T20:29:46.316Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:52.764 20:29:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 96118 00:16:52.764 [2024-11-26 20:29:46.231300] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:52.764 [2024-11-26 20:29:46.231453] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:52.764 20:29:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 96118 00:16:52.764 [2024-11-26 20:29:46.231539] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:52.764 [2024-11-26 20:29:46.231550] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:16:53.023 [2024-11-26 20:29:46.316613] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:53.281 ************************************ 00:16:53.281 END TEST raid5f_rebuild_test_sb 00:16:53.281 ************************************ 00:16:53.281 20:29:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:53.281 00:16:53.281 real 0m25.739s 00:16:53.281 user 0m32.765s 00:16:53.281 sys 0m3.134s 00:16:53.281 20:29:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:53.281 20:29:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.281 20:29:46 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:16:53.281 20:29:46 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:16:53.281 20:29:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:53.281 20:29:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:53.281 20:29:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:53.281 ************************************ 00:16:53.281 START TEST raid_state_function_test_sb_4k 00:16:53.281 ************************************ 00:16:53.281 20:29:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:16:53.281 20:29:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:53.281 20:29:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:16:53.281 20:29:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:53.281 20:29:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:53.281 20:29:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:53.281 20:29:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:53.281 20:29:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:53.281 20:29:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:53.281 20:29:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:53.281 20:29:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:53.281 20:29:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:53.281 20:29:46 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:53.281 20:29:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:53.281 20:29:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:53.281 20:29:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:53.281 20:29:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:53.281 20:29:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:53.281 20:29:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:53.281 20:29:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:53.281 20:29:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:53.281 20:29:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:53.281 20:29:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:53.281 20:29:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=96913 00:16:53.281 20:29:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:53.281 Process raid pid: 96913 00:16:53.281 20:29:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 96913' 00:16:53.281 20:29:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 96913 00:16:53.281 20:29:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 96913 ']' 00:16:53.281 20:29:46 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:53.281 20:29:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:53.281 20:29:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:53.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:53.281 20:29:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:53.281 20:29:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.281 [2024-11-26 20:29:46.819095] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:53.281 [2024-11-26 20:29:46.819324] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:53.543 [2024-11-26 20:29:46.963113] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.543 [2024-11-26 20:29:47.040361] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.802 [2024-11-26 20:29:47.111976] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:53.802 [2024-11-26 20:29:47.112094] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:54.368 20:29:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:54.368 20:29:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:16:54.368 20:29:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:16:54.368 20:29:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.368 20:29:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.368 [2024-11-26 20:29:47.673505] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:54.368 [2024-11-26 20:29:47.673568] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:54.368 [2024-11-26 20:29:47.673582] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:54.368 [2024-11-26 20:29:47.673593] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:54.368 20:29:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.368 20:29:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:54.368 20:29:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:54.368 20:29:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:54.368 20:29:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:54.368 20:29:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:54.368 20:29:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:54.368 20:29:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.368 20:29:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.368 20:29:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.368 
20:29:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.368 20:29:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.368 20:29:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.368 20:29:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.368 20:29:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.368 20:29:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.368 20:29:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.368 "name": "Existed_Raid", 00:16:54.368 "uuid": "18b501d4-9d11-4df2-af1a-89ff029fd180", 00:16:54.368 "strip_size_kb": 0, 00:16:54.368 "state": "configuring", 00:16:54.368 "raid_level": "raid1", 00:16:54.368 "superblock": true, 00:16:54.368 "num_base_bdevs": 2, 00:16:54.368 "num_base_bdevs_discovered": 0, 00:16:54.368 "num_base_bdevs_operational": 2, 00:16:54.368 "base_bdevs_list": [ 00:16:54.368 { 00:16:54.368 "name": "BaseBdev1", 00:16:54.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.368 "is_configured": false, 00:16:54.368 "data_offset": 0, 00:16:54.368 "data_size": 0 00:16:54.368 }, 00:16:54.368 { 00:16:54.368 "name": "BaseBdev2", 00:16:54.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.368 "is_configured": false, 00:16:54.368 "data_offset": 0, 00:16:54.368 "data_size": 0 00:16:54.368 } 00:16:54.368 ] 00:16:54.368 }' 00:16:54.368 20:29:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.368 20:29:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.626 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:16:54.626 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.626 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.626 [2024-11-26 20:29:48.100715] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:54.626 [2024-11-26 20:29:48.100826] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:16:54.626 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.626 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:54.626 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.626 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.626 [2024-11-26 20:29:48.112725] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:54.626 [2024-11-26 20:29:48.112809] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:54.626 [2024-11-26 20:29:48.112841] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:54.626 [2024-11-26 20:29:48.112865] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:54.626 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.626 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:16:54.626 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.626 20:29:48 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.626 [2024-11-26 20:29:48.134768] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:54.626 BaseBdev1 00:16:54.626 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.626 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:54.626 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:54.626 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:54.626 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:16:54.626 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:54.626 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:54.626 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:54.627 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.627 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.627 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.627 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:54.627 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.627 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.627 [ 00:16:54.627 { 00:16:54.627 "name": "BaseBdev1", 00:16:54.627 "aliases": [ 00:16:54.627 
"a293109f-b076-4465-9d7a-3fffd6908dec" 00:16:54.627 ], 00:16:54.627 "product_name": "Malloc disk", 00:16:54.627 "block_size": 4096, 00:16:54.627 "num_blocks": 8192, 00:16:54.627 "uuid": "a293109f-b076-4465-9d7a-3fffd6908dec", 00:16:54.627 "assigned_rate_limits": { 00:16:54.627 "rw_ios_per_sec": 0, 00:16:54.627 "rw_mbytes_per_sec": 0, 00:16:54.627 "r_mbytes_per_sec": 0, 00:16:54.627 "w_mbytes_per_sec": 0 00:16:54.627 }, 00:16:54.627 "claimed": true, 00:16:54.627 "claim_type": "exclusive_write", 00:16:54.627 "zoned": false, 00:16:54.627 "supported_io_types": { 00:16:54.627 "read": true, 00:16:54.627 "write": true, 00:16:54.627 "unmap": true, 00:16:54.627 "flush": true, 00:16:54.627 "reset": true, 00:16:54.627 "nvme_admin": false, 00:16:54.627 "nvme_io": false, 00:16:54.627 "nvme_io_md": false, 00:16:54.627 "write_zeroes": true, 00:16:54.627 "zcopy": true, 00:16:54.627 "get_zone_info": false, 00:16:54.627 "zone_management": false, 00:16:54.627 "zone_append": false, 00:16:54.627 "compare": false, 00:16:54.627 "compare_and_write": false, 00:16:54.627 "abort": true, 00:16:54.627 "seek_hole": false, 00:16:54.627 "seek_data": false, 00:16:54.627 "copy": true, 00:16:54.627 "nvme_iov_md": false 00:16:54.627 }, 00:16:54.627 "memory_domains": [ 00:16:54.627 { 00:16:54.627 "dma_device_id": "system", 00:16:54.627 "dma_device_type": 1 00:16:54.627 }, 00:16:54.627 { 00:16:54.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.627 "dma_device_type": 2 00:16:54.627 } 00:16:54.627 ], 00:16:54.627 "driver_specific": {} 00:16:54.627 } 00:16:54.627 ] 00:16:54.884 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.884 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:16:54.884 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:54.884 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:54.884 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:54.884 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:54.884 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:54.884 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:54.884 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.884 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.884 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.884 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.884 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.884 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.884 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.884 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.884 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.884 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.884 "name": "Existed_Raid", 00:16:54.884 "uuid": "7dbdd967-32a8-48c4-ad66-026347dfed4f", 00:16:54.884 "strip_size_kb": 0, 00:16:54.884 "state": "configuring", 00:16:54.884 "raid_level": "raid1", 00:16:54.884 "superblock": true, 00:16:54.884 "num_base_bdevs": 2, 00:16:54.885 
"num_base_bdevs_discovered": 1, 00:16:54.885 "num_base_bdevs_operational": 2, 00:16:54.885 "base_bdevs_list": [ 00:16:54.885 { 00:16:54.885 "name": "BaseBdev1", 00:16:54.885 "uuid": "a293109f-b076-4465-9d7a-3fffd6908dec", 00:16:54.885 "is_configured": true, 00:16:54.885 "data_offset": 256, 00:16:54.885 "data_size": 7936 00:16:54.885 }, 00:16:54.885 { 00:16:54.885 "name": "BaseBdev2", 00:16:54.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.885 "is_configured": false, 00:16:54.885 "data_offset": 0, 00:16:54.885 "data_size": 0 00:16:54.885 } 00:16:54.885 ] 00:16:54.885 }' 00:16:54.885 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.885 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:55.143 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:55.143 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.143 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:55.143 [2024-11-26 20:29:48.629982] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:55.143 [2024-11-26 20:29:48.630095] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:16:55.143 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.143 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:55.143 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.143 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:55.143 [2024-11-26 20:29:48.638005] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:55.143 [2024-11-26 20:29:48.639939] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:55.143 [2024-11-26 20:29:48.640023] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:55.143 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.143 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:55.143 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:55.143 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:55.144 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:55.144 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:55.144 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:55.144 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:55.144 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:55.144 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.144 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.144 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.144 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.144 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:55.144 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.144 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.144 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:55.144 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.401 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.401 "name": "Existed_Raid", 00:16:55.401 "uuid": "b8322008-6775-4188-88b0-84853fdfd828", 00:16:55.401 "strip_size_kb": 0, 00:16:55.401 "state": "configuring", 00:16:55.401 "raid_level": "raid1", 00:16:55.401 "superblock": true, 00:16:55.401 "num_base_bdevs": 2, 00:16:55.401 "num_base_bdevs_discovered": 1, 00:16:55.401 "num_base_bdevs_operational": 2, 00:16:55.401 "base_bdevs_list": [ 00:16:55.401 { 00:16:55.401 "name": "BaseBdev1", 00:16:55.401 "uuid": "a293109f-b076-4465-9d7a-3fffd6908dec", 00:16:55.401 "is_configured": true, 00:16:55.401 "data_offset": 256, 00:16:55.401 "data_size": 7936 00:16:55.401 }, 00:16:55.401 { 00:16:55.401 "name": "BaseBdev2", 00:16:55.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.401 "is_configured": false, 00:16:55.401 "data_offset": 0, 00:16:55.401 "data_size": 0 00:16:55.401 } 00:16:55.401 ] 00:16:55.401 }' 00:16:55.401 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.401 20:29:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:55.697 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:16:55.697 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.697 20:29:49 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:55.697 [2024-11-26 20:29:49.109371] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:55.697 [2024-11-26 20:29:49.109768] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:16:55.697 [2024-11-26 20:29:49.109846] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:55.697 BaseBdev2 00:16:55.697 [2024-11-26 20:29:49.110290] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:16:55.697 [2024-11-26 20:29:49.110492] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:16:55.697 [2024-11-26 20:29:49.110581] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:16:55.697 [2024-11-26 20:29:49.110881] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:55.697 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.697 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:55.697 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:55.697 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:55.697 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:16:55.697 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:55.697 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:55.697 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:55.697 20:29:49 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.697 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:55.697 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.697 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:55.697 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.697 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:55.697 [ 00:16:55.697 { 00:16:55.697 "name": "BaseBdev2", 00:16:55.697 "aliases": [ 00:16:55.697 "b551d049-46ef-429c-9357-d50faaaa9733" 00:16:55.697 ], 00:16:55.697 "product_name": "Malloc disk", 00:16:55.697 "block_size": 4096, 00:16:55.697 "num_blocks": 8192, 00:16:55.697 "uuid": "b551d049-46ef-429c-9357-d50faaaa9733", 00:16:55.697 "assigned_rate_limits": { 00:16:55.697 "rw_ios_per_sec": 0, 00:16:55.697 "rw_mbytes_per_sec": 0, 00:16:55.697 "r_mbytes_per_sec": 0, 00:16:55.697 "w_mbytes_per_sec": 0 00:16:55.697 }, 00:16:55.697 "claimed": true, 00:16:55.697 "claim_type": "exclusive_write", 00:16:55.697 "zoned": false, 00:16:55.697 "supported_io_types": { 00:16:55.697 "read": true, 00:16:55.697 "write": true, 00:16:55.697 "unmap": true, 00:16:55.697 "flush": true, 00:16:55.697 "reset": true, 00:16:55.697 "nvme_admin": false, 00:16:55.697 "nvme_io": false, 00:16:55.697 "nvme_io_md": false, 00:16:55.697 "write_zeroes": true, 00:16:55.697 "zcopy": true, 00:16:55.697 "get_zone_info": false, 00:16:55.697 "zone_management": false, 00:16:55.697 "zone_append": false, 00:16:55.697 "compare": false, 00:16:55.697 "compare_and_write": false, 00:16:55.697 "abort": true, 00:16:55.697 "seek_hole": false, 00:16:55.697 "seek_data": false, 00:16:55.697 "copy": true, 00:16:55.697 "nvme_iov_md": false 
00:16:55.697 }, 00:16:55.697 "memory_domains": [ 00:16:55.697 { 00:16:55.697 "dma_device_id": "system", 00:16:55.697 "dma_device_type": 1 00:16:55.697 }, 00:16:55.697 { 00:16:55.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:55.697 "dma_device_type": 2 00:16:55.697 } 00:16:55.697 ], 00:16:55.697 "driver_specific": {} 00:16:55.697 } 00:16:55.697 ] 00:16:55.697 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.697 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:16:55.697 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:55.697 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:55.697 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:55.697 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:55.697 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:55.697 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:55.697 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:55.697 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:55.697 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.697 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.697 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.697 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:16:55.697 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.697 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.697 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:55.697 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.697 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.697 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.697 "name": "Existed_Raid", 00:16:55.697 "uuid": "b8322008-6775-4188-88b0-84853fdfd828", 00:16:55.697 "strip_size_kb": 0, 00:16:55.697 "state": "online", 00:16:55.697 "raid_level": "raid1", 00:16:55.697 "superblock": true, 00:16:55.697 "num_base_bdevs": 2, 00:16:55.697 "num_base_bdevs_discovered": 2, 00:16:55.697 "num_base_bdevs_operational": 2, 00:16:55.697 "base_bdevs_list": [ 00:16:55.697 { 00:16:55.697 "name": "BaseBdev1", 00:16:55.697 "uuid": "a293109f-b076-4465-9d7a-3fffd6908dec", 00:16:55.697 "is_configured": true, 00:16:55.697 "data_offset": 256, 00:16:55.697 "data_size": 7936 00:16:55.697 }, 00:16:55.697 { 00:16:55.697 "name": "BaseBdev2", 00:16:55.697 "uuid": "b551d049-46ef-429c-9357-d50faaaa9733", 00:16:55.697 "is_configured": true, 00:16:55.697 "data_offset": 256, 00:16:55.697 "data_size": 7936 00:16:55.697 } 00:16:55.697 ] 00:16:55.697 }' 00:16:55.697 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.697 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:56.262 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:56.262 20:29:49 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:56.262 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:56.262 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:56.262 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:16:56.262 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:56.263 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:56.263 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:56.263 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.263 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:56.263 [2024-11-26 20:29:49.605029] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:56.263 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.263 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:56.263 "name": "Existed_Raid", 00:16:56.263 "aliases": [ 00:16:56.263 "b8322008-6775-4188-88b0-84853fdfd828" 00:16:56.263 ], 00:16:56.263 "product_name": "Raid Volume", 00:16:56.263 "block_size": 4096, 00:16:56.263 "num_blocks": 7936, 00:16:56.263 "uuid": "b8322008-6775-4188-88b0-84853fdfd828", 00:16:56.263 "assigned_rate_limits": { 00:16:56.263 "rw_ios_per_sec": 0, 00:16:56.263 "rw_mbytes_per_sec": 0, 00:16:56.263 "r_mbytes_per_sec": 0, 00:16:56.263 "w_mbytes_per_sec": 0 00:16:56.263 }, 00:16:56.263 "claimed": false, 00:16:56.263 "zoned": false, 00:16:56.263 "supported_io_types": { 00:16:56.263 "read": true, 
00:16:56.263 "write": true, 00:16:56.263 "unmap": false, 00:16:56.263 "flush": false, 00:16:56.263 "reset": true, 00:16:56.263 "nvme_admin": false, 00:16:56.263 "nvme_io": false, 00:16:56.263 "nvme_io_md": false, 00:16:56.263 "write_zeroes": true, 00:16:56.263 "zcopy": false, 00:16:56.263 "get_zone_info": false, 00:16:56.263 "zone_management": false, 00:16:56.263 "zone_append": false, 00:16:56.263 "compare": false, 00:16:56.263 "compare_and_write": false, 00:16:56.263 "abort": false, 00:16:56.263 "seek_hole": false, 00:16:56.263 "seek_data": false, 00:16:56.263 "copy": false, 00:16:56.263 "nvme_iov_md": false 00:16:56.263 }, 00:16:56.263 "memory_domains": [ 00:16:56.263 { 00:16:56.263 "dma_device_id": "system", 00:16:56.263 "dma_device_type": 1 00:16:56.263 }, 00:16:56.263 { 00:16:56.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:56.263 "dma_device_type": 2 00:16:56.263 }, 00:16:56.263 { 00:16:56.263 "dma_device_id": "system", 00:16:56.263 "dma_device_type": 1 00:16:56.263 }, 00:16:56.263 { 00:16:56.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:56.263 "dma_device_type": 2 00:16:56.263 } 00:16:56.263 ], 00:16:56.263 "driver_specific": { 00:16:56.263 "raid": { 00:16:56.263 "uuid": "b8322008-6775-4188-88b0-84853fdfd828", 00:16:56.263 "strip_size_kb": 0, 00:16:56.263 "state": "online", 00:16:56.263 "raid_level": "raid1", 00:16:56.263 "superblock": true, 00:16:56.263 "num_base_bdevs": 2, 00:16:56.263 "num_base_bdevs_discovered": 2, 00:16:56.263 "num_base_bdevs_operational": 2, 00:16:56.263 "base_bdevs_list": [ 00:16:56.263 { 00:16:56.263 "name": "BaseBdev1", 00:16:56.263 "uuid": "a293109f-b076-4465-9d7a-3fffd6908dec", 00:16:56.263 "is_configured": true, 00:16:56.263 "data_offset": 256, 00:16:56.263 "data_size": 7936 00:16:56.263 }, 00:16:56.263 { 00:16:56.263 "name": "BaseBdev2", 00:16:56.263 "uuid": "b551d049-46ef-429c-9357-d50faaaa9733", 00:16:56.263 "is_configured": true, 00:16:56.263 "data_offset": 256, 00:16:56.263 "data_size": 7936 00:16:56.263 } 
00:16:56.263 ] 00:16:56.263 } 00:16:56.263 } 00:16:56.263 }' 00:16:56.263 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:56.263 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:56.263 BaseBdev2' 00:16:56.263 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:56.263 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:16:56.263 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:56.263 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:56.263 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.263 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:56.263 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:56.263 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.263 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:56.263 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:56.263 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:56.263 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:56.263 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.263 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:56.263 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:56.521 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.521 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:56.521 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:56.521 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:56.521 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.521 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:56.521 [2024-11-26 20:29:49.840292] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:56.521 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.521 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:56.521 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:56.521 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:56.521 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:16:56.521 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:56.521 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:56.521 20:29:49 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:56.521 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:56.521 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:56.521 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:56.521 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:56.521 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:56.521 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:56.521 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:56.521 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:56.521 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.521 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:56.521 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.521 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:56.521 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.521 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:56.521 "name": "Existed_Raid", 00:16:56.521 "uuid": "b8322008-6775-4188-88b0-84853fdfd828", 00:16:56.521 "strip_size_kb": 0, 00:16:56.521 "state": "online", 00:16:56.521 "raid_level": "raid1", 00:16:56.521 "superblock": true, 00:16:56.521 
"num_base_bdevs": 2, 00:16:56.521 "num_base_bdevs_discovered": 1, 00:16:56.521 "num_base_bdevs_operational": 1, 00:16:56.521 "base_bdevs_list": [ 00:16:56.521 { 00:16:56.521 "name": null, 00:16:56.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.521 "is_configured": false, 00:16:56.521 "data_offset": 0, 00:16:56.521 "data_size": 7936 00:16:56.521 }, 00:16:56.521 { 00:16:56.521 "name": "BaseBdev2", 00:16:56.521 "uuid": "b551d049-46ef-429c-9357-d50faaaa9733", 00:16:56.521 "is_configured": true, 00:16:56.521 "data_offset": 256, 00:16:56.521 "data_size": 7936 00:16:56.521 } 00:16:56.521 ] 00:16:56.521 }' 00:16:56.521 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:56.521 20:29:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:56.779 20:29:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:56.779 20:29:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:56.779 20:29:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:56.779 20:29:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.779 20:29:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.779 20:29:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.035 20:29:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.035 20:29:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:57.035 20:29:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:57.035 20:29:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:16:57.036 20:29:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.036 20:29:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.036 [2024-11-26 20:29:50.372401] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:57.036 [2024-11-26 20:29:50.372522] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:57.036 [2024-11-26 20:29:50.394662] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:57.036 [2024-11-26 20:29:50.394719] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:57.036 [2024-11-26 20:29:50.394748] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:16:57.036 20:29:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.036 20:29:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:57.036 20:29:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:57.036 20:29:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.036 20:29:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.036 20:29:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:57.036 20:29:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.036 20:29:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.036 20:29:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:57.036 20:29:50 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:57.036 20:29:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:16:57.036 20:29:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 96913 00:16:57.036 20:29:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 96913 ']' 00:16:57.036 20:29:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 96913 00:16:57.036 20:29:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:16:57.036 20:29:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:57.036 20:29:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96913 00:16:57.036 20:29:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:57.036 20:29:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:57.036 killing process with pid 96913 00:16:57.036 20:29:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96913' 00:16:57.036 20:29:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@969 -- # kill 96913 00:16:57.036 [2024-11-26 20:29:50.488092] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:57.036 20:29:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@974 -- # wait 96913 00:16:57.036 [2024-11-26 20:29:50.489778] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:57.600 20:29:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:16:57.600 00:16:57.600 real 0m4.127s 00:16:57.600 user 0m6.335s 00:16:57.600 sys 0m0.891s 00:16:57.600 20:29:50 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:57.600 20:29:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.600 ************************************ 00:16:57.600 END TEST raid_state_function_test_sb_4k 00:16:57.600 ************************************ 00:16:57.600 20:29:50 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:16:57.600 20:29:50 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:57.600 20:29:50 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:57.600 20:29:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:57.600 ************************************ 00:16:57.600 START TEST raid_superblock_test_4k 00:16:57.600 ************************************ 00:16:57.600 20:29:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:16:57.600 20:29:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:16:57.600 20:29:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:16:57.600 20:29:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:57.600 20:29:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:57.600 20:29:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:57.600 20:29:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:57.600 20:29:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:57.600 20:29:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:57.600 20:29:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:57.600 
20:29:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:57.600 20:29:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:57.600 20:29:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:57.600 20:29:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:57.600 20:29:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:16:57.600 20:29:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:16:57.600 20:29:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=97154 00:16:57.600 20:29:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:57.600 20:29:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 97154 00:16:57.600 20:29:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@831 -- # '[' -z 97154 ']' 00:16:57.600 20:29:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.600 20:29:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:57.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.600 20:29:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.600 20:29:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:57.600 20:29:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.600 [2024-11-26 20:29:51.010662] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:16:57.600 [2024-11-26 20:29:51.010836] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97154 ] 00:16:57.886 [2024-11-26 20:29:51.173947] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.886 [2024-11-26 20:29:51.256782] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.886 [2024-11-26 20:29:51.335503] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:57.886 [2024-11-26 20:29:51.335545] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:58.450 20:29:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:58.450 20:29:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # return 0 00:16:58.450 20:29:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:58.450 20:29:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:58.450 20:29:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:58.450 20:29:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:58.450 20:29:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:58.450 20:29:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:58.450 20:29:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:58.450 20:29:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:58.450 20:29:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:16:58.451 20:29:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.451 20:29:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.451 malloc1 00:16:58.451 20:29:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.451 20:29:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:58.451 20:29:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.451 20:29:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.451 [2024-11-26 20:29:51.928605] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:58.451 [2024-11-26 20:29:51.928720] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:58.451 [2024-11-26 20:29:51.928745] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:58.451 [2024-11-26 20:29:51.928763] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.451 [2024-11-26 20:29:51.931259] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.451 [2024-11-26 20:29:51.931299] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:58.451 pt1 00:16:58.451 20:29:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.451 20:29:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:58.451 20:29:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:58.451 20:29:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:58.451 20:29:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:16:58.451 20:29:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:58.451 20:29:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:58.451 20:29:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:58.451 20:29:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:58.451 20:29:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:16:58.451 20:29:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.451 20:29:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.451 malloc2 00:16:58.451 20:29:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.451 20:29:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:58.451 20:29:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.451 20:29:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.451 [2024-11-26 20:29:51.970510] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:58.451 [2024-11-26 20:29:51.970590] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:58.451 [2024-11-26 20:29:51.970629] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:58.451 [2024-11-26 20:29:51.970645] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.451 [2024-11-26 20:29:51.973460] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.451 [2024-11-26 
20:29:51.973507] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:58.451 pt2 00:16:58.451 20:29:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.451 20:29:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:58.451 20:29:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:58.451 20:29:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:16:58.451 20:29:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.451 20:29:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.451 [2024-11-26 20:29:51.982509] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:58.451 [2024-11-26 20:29:51.984694] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:58.451 [2024-11-26 20:29:51.984853] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:16:58.451 [2024-11-26 20:29:51.984870] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:58.451 [2024-11-26 20:29:51.985191] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:16:58.451 [2024-11-26 20:29:51.985364] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:16:58.451 [2024-11-26 20:29:51.985376] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:16:58.451 [2024-11-26 20:29:51.985536] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:58.451 20:29:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.451 20:29:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:58.451 20:29:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:58.451 20:29:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:58.451 20:29:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:58.451 20:29:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:58.451 20:29:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:58.451 20:29:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.451 20:29:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.451 20:29:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.451 20:29:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.451 20:29:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.451 20:29:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.451 20:29:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.451 20:29:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.708 20:29:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.708 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.708 "name": "raid_bdev1", 00:16:58.708 "uuid": "950cee67-d450-4145-800a-9e277a958d2b", 00:16:58.708 "strip_size_kb": 0, 00:16:58.708 "state": "online", 00:16:58.708 "raid_level": "raid1", 00:16:58.708 "superblock": true, 00:16:58.708 "num_base_bdevs": 2, 00:16:58.708 
"num_base_bdevs_discovered": 2, 00:16:58.708 "num_base_bdevs_operational": 2, 00:16:58.708 "base_bdevs_list": [ 00:16:58.708 { 00:16:58.708 "name": "pt1", 00:16:58.708 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:58.708 "is_configured": true, 00:16:58.708 "data_offset": 256, 00:16:58.708 "data_size": 7936 00:16:58.708 }, 00:16:58.708 { 00:16:58.708 "name": "pt2", 00:16:58.708 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:58.708 "is_configured": true, 00:16:58.708 "data_offset": 256, 00:16:58.708 "data_size": 7936 00:16:58.708 } 00:16:58.708 ] 00:16:58.708 }' 00:16:58.708 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.708 20:29:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.965 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:58.965 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:58.965 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:58.965 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:58.965 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:16:58.965 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:58.965 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:58.965 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:58.965 20:29:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.965 20:29:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.965 [2024-11-26 20:29:52.450160] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:16:58.965 20:29:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.965 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:58.965 "name": "raid_bdev1", 00:16:58.965 "aliases": [ 00:16:58.965 "950cee67-d450-4145-800a-9e277a958d2b" 00:16:58.965 ], 00:16:58.965 "product_name": "Raid Volume", 00:16:58.965 "block_size": 4096, 00:16:58.965 "num_blocks": 7936, 00:16:58.965 "uuid": "950cee67-d450-4145-800a-9e277a958d2b", 00:16:58.965 "assigned_rate_limits": { 00:16:58.965 "rw_ios_per_sec": 0, 00:16:58.965 "rw_mbytes_per_sec": 0, 00:16:58.965 "r_mbytes_per_sec": 0, 00:16:58.965 "w_mbytes_per_sec": 0 00:16:58.965 }, 00:16:58.965 "claimed": false, 00:16:58.965 "zoned": false, 00:16:58.965 "supported_io_types": { 00:16:58.965 "read": true, 00:16:58.965 "write": true, 00:16:58.965 "unmap": false, 00:16:58.965 "flush": false, 00:16:58.965 "reset": true, 00:16:58.965 "nvme_admin": false, 00:16:58.965 "nvme_io": false, 00:16:58.965 "nvme_io_md": false, 00:16:58.965 "write_zeroes": true, 00:16:58.965 "zcopy": false, 00:16:58.965 "get_zone_info": false, 00:16:58.965 "zone_management": false, 00:16:58.965 "zone_append": false, 00:16:58.965 "compare": false, 00:16:58.965 "compare_and_write": false, 00:16:58.965 "abort": false, 00:16:58.965 "seek_hole": false, 00:16:58.965 "seek_data": false, 00:16:58.965 "copy": false, 00:16:58.965 "nvme_iov_md": false 00:16:58.965 }, 00:16:58.965 "memory_domains": [ 00:16:58.965 { 00:16:58.965 "dma_device_id": "system", 00:16:58.965 "dma_device_type": 1 00:16:58.965 }, 00:16:58.965 { 00:16:58.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.965 "dma_device_type": 2 00:16:58.965 }, 00:16:58.965 { 00:16:58.965 "dma_device_id": "system", 00:16:58.965 "dma_device_type": 1 00:16:58.965 }, 00:16:58.965 { 00:16:58.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.965 "dma_device_type": 2 00:16:58.965 } 00:16:58.965 ], 
00:16:58.965 "driver_specific": { 00:16:58.965 "raid": { 00:16:58.965 "uuid": "950cee67-d450-4145-800a-9e277a958d2b", 00:16:58.965 "strip_size_kb": 0, 00:16:58.965 "state": "online", 00:16:58.965 "raid_level": "raid1", 00:16:58.965 "superblock": true, 00:16:58.965 "num_base_bdevs": 2, 00:16:58.965 "num_base_bdevs_discovered": 2, 00:16:58.965 "num_base_bdevs_operational": 2, 00:16:58.965 "base_bdevs_list": [ 00:16:58.965 { 00:16:58.965 "name": "pt1", 00:16:58.965 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:58.965 "is_configured": true, 00:16:58.965 "data_offset": 256, 00:16:58.965 "data_size": 7936 00:16:58.965 }, 00:16:58.965 { 00:16:58.965 "name": "pt2", 00:16:58.965 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:58.965 "is_configured": true, 00:16:58.965 "data_offset": 256, 00:16:58.965 "data_size": 7936 00:16:58.965 } 00:16:58.965 ] 00:16:58.965 } 00:16:58.965 } 00:16:58.965 }' 00:16:58.965 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:59.222 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:59.222 pt2' 00:16:59.222 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:59.222 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:16:59.222 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:59.222 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:59.222 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:59.222 20:29:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.222 20:29:52 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.222 20:29:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.222 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:59.222 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:59.222 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:59.222 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:59.222 20:29:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.222 20:29:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.222 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:59.222 20:29:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.222 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:59.222 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:59.222 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:59.222 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:59.222 20:29:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.222 20:29:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.222 [2024-11-26 20:29:52.709614] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:59.222 20:29:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:16:59.222 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=950cee67-d450-4145-800a-9e277a958d2b 00:16:59.222 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 950cee67-d450-4145-800a-9e277a958d2b ']' 00:16:59.222 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:59.222 20:29:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.222 20:29:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.222 [2024-11-26 20:29:52.753238] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:59.222 [2024-11-26 20:29:52.753278] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:59.222 [2024-11-26 20:29:52.753372] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:59.222 [2024-11-26 20:29:52.753455] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:59.222 [2024-11-26 20:29:52.753466] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:16:59.222 20:29:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.222 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.222 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:59.222 20:29:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.222 20:29:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.222 20:29:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.482 [2024-11-26 20:29:52.897162] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:59.482 [2024-11-26 20:29:52.899379] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:59.482 [2024-11-26 20:29:52.899468] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:59.482 [2024-11-26 20:29:52.899523] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:59.482 [2024-11-26 20:29:52.899541] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:59.482 [2024-11-26 20:29:52.899551] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:16:59.482 request: 00:16:59.482 { 00:16:59.482 "name": "raid_bdev1", 00:16:59.482 "raid_level": "raid1", 00:16:59.482 "base_bdevs": [ 00:16:59.482 "malloc1", 00:16:59.482 "malloc2" 00:16:59.482 ], 00:16:59.482 "superblock": false, 00:16:59.482 "method": "bdev_raid_create", 00:16:59.482 "req_id": 1 00:16:59.482 } 00:16:59.482 Got JSON-RPC error response 00:16:59.482 response: 00:16:59.482 { 00:16:59.482 "code": -17, 00:16:59.482 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:59.482 } 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.482 [2024-11-26 20:29:52.956969] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:59.482 [2024-11-26 20:29:52.957070] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.482 [2024-11-26 20:29:52.957094] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:59.482 [2024-11-26 20:29:52.957105] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.482 [2024-11-26 20:29:52.959587] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.482 [2024-11-26 20:29:52.959657] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:59.482 [2024-11-26 20:29:52.959751] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:59.482 [2024-11-26 20:29:52.959801] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:59.482 pt1 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.482 20:29:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.483 20:29:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.483 20:29:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.483 "name": "raid_bdev1", 00:16:59.483 "uuid": "950cee67-d450-4145-800a-9e277a958d2b", 00:16:59.483 "strip_size_kb": 0, 00:16:59.483 "state": "configuring", 00:16:59.483 "raid_level": "raid1", 00:16:59.483 "superblock": true, 00:16:59.483 "num_base_bdevs": 2, 00:16:59.483 "num_base_bdevs_discovered": 1, 00:16:59.483 "num_base_bdevs_operational": 2, 00:16:59.483 "base_bdevs_list": [ 00:16:59.483 { 00:16:59.483 "name": "pt1", 00:16:59.483 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:59.483 "is_configured": true, 00:16:59.483 "data_offset": 256, 00:16:59.483 "data_size": 7936 00:16:59.483 }, 00:16:59.483 { 00:16:59.483 "name": null, 00:16:59.483 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:59.483 "is_configured": false, 00:16:59.483 "data_offset": 256, 00:16:59.483 "data_size": 7936 00:16:59.483 } 
00:16:59.483 ] 00:16:59.483 }' 00:16:59.483 20:29:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.483 20:29:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:00.048 20:29:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:00.048 20:29:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:00.048 20:29:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:00.048 20:29:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:00.048 20:29:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.048 20:29:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:00.048 [2024-11-26 20:29:53.436154] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:00.048 [2024-11-26 20:29:53.436236] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:00.048 [2024-11-26 20:29:53.436265] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:00.048 [2024-11-26 20:29:53.436276] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:00.048 [2024-11-26 20:29:53.436789] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:00.048 [2024-11-26 20:29:53.436820] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:00.048 [2024-11-26 20:29:53.436909] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:00.048 [2024-11-26 20:29:53.436935] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:00.048 [2024-11-26 20:29:53.437052] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000006980 00:17:00.048 [2024-11-26 20:29:53.437089] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:00.048 [2024-11-26 20:29:53.437355] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:17:00.048 [2024-11-26 20:29:53.437499] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:17:00.048 [2024-11-26 20:29:53.437525] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:17:00.048 [2024-11-26 20:29:53.437659] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:00.048 pt2 00:17:00.048 20:29:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.048 20:29:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:00.049 20:29:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:00.049 20:29:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:00.049 20:29:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:00.049 20:29:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:00.049 20:29:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:00.049 20:29:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:00.049 20:29:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:00.049 20:29:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.049 20:29:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.049 20:29:53 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.049 20:29:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.049 20:29:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.049 20:29:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.049 20:29:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.049 20:29:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:00.049 20:29:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.049 20:29:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.049 "name": "raid_bdev1", 00:17:00.049 "uuid": "950cee67-d450-4145-800a-9e277a958d2b", 00:17:00.049 "strip_size_kb": 0, 00:17:00.049 "state": "online", 00:17:00.049 "raid_level": "raid1", 00:17:00.049 "superblock": true, 00:17:00.049 "num_base_bdevs": 2, 00:17:00.049 "num_base_bdevs_discovered": 2, 00:17:00.049 "num_base_bdevs_operational": 2, 00:17:00.049 "base_bdevs_list": [ 00:17:00.049 { 00:17:00.049 "name": "pt1", 00:17:00.049 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:00.049 "is_configured": true, 00:17:00.049 "data_offset": 256, 00:17:00.049 "data_size": 7936 00:17:00.049 }, 00:17:00.049 { 00:17:00.049 "name": "pt2", 00:17:00.049 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:00.049 "is_configured": true, 00:17:00.049 "data_offset": 256, 00:17:00.049 "data_size": 7936 00:17:00.049 } 00:17:00.049 ] 00:17:00.049 }' 00:17:00.049 20:29:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.049 20:29:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:00.616 20:29:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:17:00.616 20:29:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:00.616 20:29:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:00.616 20:29:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:00.616 20:29:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:00.616 20:29:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:00.616 20:29:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:00.616 20:29:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:00.616 20:29:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.616 20:29:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:00.616 [2024-11-26 20:29:53.887694] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:00.616 20:29:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.616 20:29:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:00.616 "name": "raid_bdev1", 00:17:00.616 "aliases": [ 00:17:00.616 "950cee67-d450-4145-800a-9e277a958d2b" 00:17:00.616 ], 00:17:00.616 "product_name": "Raid Volume", 00:17:00.616 "block_size": 4096, 00:17:00.616 "num_blocks": 7936, 00:17:00.616 "uuid": "950cee67-d450-4145-800a-9e277a958d2b", 00:17:00.616 "assigned_rate_limits": { 00:17:00.616 "rw_ios_per_sec": 0, 00:17:00.616 "rw_mbytes_per_sec": 0, 00:17:00.616 "r_mbytes_per_sec": 0, 00:17:00.616 "w_mbytes_per_sec": 0 00:17:00.616 }, 00:17:00.616 "claimed": false, 00:17:00.616 "zoned": false, 00:17:00.616 "supported_io_types": { 00:17:00.616 "read": true, 00:17:00.616 "write": true, 00:17:00.616 "unmap": false, 
00:17:00.616 "flush": false, 00:17:00.616 "reset": true, 00:17:00.616 "nvme_admin": false, 00:17:00.616 "nvme_io": false, 00:17:00.616 "nvme_io_md": false, 00:17:00.616 "write_zeroes": true, 00:17:00.616 "zcopy": false, 00:17:00.616 "get_zone_info": false, 00:17:00.616 "zone_management": false, 00:17:00.616 "zone_append": false, 00:17:00.616 "compare": false, 00:17:00.616 "compare_and_write": false, 00:17:00.616 "abort": false, 00:17:00.616 "seek_hole": false, 00:17:00.616 "seek_data": false, 00:17:00.616 "copy": false, 00:17:00.616 "nvme_iov_md": false 00:17:00.616 }, 00:17:00.616 "memory_domains": [ 00:17:00.616 { 00:17:00.616 "dma_device_id": "system", 00:17:00.616 "dma_device_type": 1 00:17:00.616 }, 00:17:00.616 { 00:17:00.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.616 "dma_device_type": 2 00:17:00.616 }, 00:17:00.616 { 00:17:00.616 "dma_device_id": "system", 00:17:00.616 "dma_device_type": 1 00:17:00.616 }, 00:17:00.616 { 00:17:00.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.616 "dma_device_type": 2 00:17:00.616 } 00:17:00.616 ], 00:17:00.616 "driver_specific": { 00:17:00.616 "raid": { 00:17:00.616 "uuid": "950cee67-d450-4145-800a-9e277a958d2b", 00:17:00.616 "strip_size_kb": 0, 00:17:00.616 "state": "online", 00:17:00.616 "raid_level": "raid1", 00:17:00.616 "superblock": true, 00:17:00.616 "num_base_bdevs": 2, 00:17:00.616 "num_base_bdevs_discovered": 2, 00:17:00.616 "num_base_bdevs_operational": 2, 00:17:00.616 "base_bdevs_list": [ 00:17:00.616 { 00:17:00.616 "name": "pt1", 00:17:00.616 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:00.616 "is_configured": true, 00:17:00.616 "data_offset": 256, 00:17:00.616 "data_size": 7936 00:17:00.616 }, 00:17:00.616 { 00:17:00.616 "name": "pt2", 00:17:00.616 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:00.616 "is_configured": true, 00:17:00.616 "data_offset": 256, 00:17:00.616 "data_size": 7936 00:17:00.616 } 00:17:00.616 ] 00:17:00.616 } 00:17:00.616 } 00:17:00.616 }' 00:17:00.616 
20:29:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:00.616 20:29:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:00.616 pt2' 00:17:00.616 20:29:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:00.616 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:00.616 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:00.616 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:00.616 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:00.616 20:29:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.616 20:29:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:00.616 20:29:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.616 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:00.617 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:00.617 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:00.617 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:00.617 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:00.617 20:29:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.617 
20:29:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:00.617 20:29:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.617 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:00.617 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:00.617 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:00.617 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:00.617 20:29:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.617 20:29:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:00.617 [2024-11-26 20:29:54.115326] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:00.617 20:29:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.617 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 950cee67-d450-4145-800a-9e277a958d2b '!=' 950cee67-d450-4145-800a-9e277a958d2b ']' 00:17:00.617 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:00.617 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:00.617 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:00.617 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:00.617 20:29:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.617 20:29:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:00.617 [2024-11-26 20:29:54.158967] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:00.617 
20:29:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.617 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:00.617 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:00.617 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:00.617 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:00.617 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:00.617 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:00.617 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.875 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.875 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.875 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.875 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.875 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.875 20:29:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.875 20:29:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:00.875 20:29:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.875 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.875 "name": "raid_bdev1", 00:17:00.875 "uuid": "950cee67-d450-4145-800a-9e277a958d2b", 
00:17:00.875 "strip_size_kb": 0, 00:17:00.875 "state": "online", 00:17:00.875 "raid_level": "raid1", 00:17:00.875 "superblock": true, 00:17:00.875 "num_base_bdevs": 2, 00:17:00.875 "num_base_bdevs_discovered": 1, 00:17:00.875 "num_base_bdevs_operational": 1, 00:17:00.875 "base_bdevs_list": [ 00:17:00.875 { 00:17:00.875 "name": null, 00:17:00.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.875 "is_configured": false, 00:17:00.875 "data_offset": 0, 00:17:00.875 "data_size": 7936 00:17:00.875 }, 00:17:00.875 { 00:17:00.875 "name": "pt2", 00:17:00.875 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:00.875 "is_configured": true, 00:17:00.875 "data_offset": 256, 00:17:00.875 "data_size": 7936 00:17:00.875 } 00:17:00.875 ] 00:17:00.875 }' 00:17:00.875 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.875 20:29:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:01.134 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:01.134 20:29:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.134 20:29:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:01.134 [2024-11-26 20:29:54.610180] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:01.134 [2024-11-26 20:29:54.610217] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:01.134 [2024-11-26 20:29:54.610312] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:01.134 [2024-11-26 20:29:54.610361] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:01.134 [2024-11-26 20:29:54.610371] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:17:01.134 20:29:54 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.134 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.134 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:01.134 20:29:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.134 20:29:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:01.134 20:29:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.134 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:01.134 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:01.134 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:01.134 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:01.134 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:01.134 20:29:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.134 20:29:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:01.134 20:29:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.134 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:01.134 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:01.134 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:01.134 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:01.134 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:17:01.134 20:29:54 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:01.134 20:29:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.134 20:29:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:01.392 [2024-11-26 20:29:54.686053] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:01.392 [2024-11-26 20:29:54.686168] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.392 [2024-11-26 20:29:54.686225] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:01.392 [2024-11-26 20:29:54.686262] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.392 [2024-11-26 20:29:54.688788] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.392 [2024-11-26 20:29:54.688861] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:01.392 [2024-11-26 20:29:54.688982] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:01.392 [2024-11-26 20:29:54.689090] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:01.392 [2024-11-26 20:29:54.689229] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:17:01.392 [2024-11-26 20:29:54.689271] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:01.392 [2024-11-26 20:29:54.689556] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:01.392 [2024-11-26 20:29:54.689754] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:17:01.392 [2024-11-26 20:29:54.689824] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 
00:17:01.392 [2024-11-26 20:29:54.690049] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:01.392 pt2 00:17:01.392 20:29:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.392 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:01.392 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:01.392 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:01.392 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:01.392 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:01.392 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:01.392 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.392 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.392 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.392 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.392 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.392 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.392 20:29:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.392 20:29:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:01.392 20:29:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.392 20:29:54 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.392 "name": "raid_bdev1", 00:17:01.392 "uuid": "950cee67-d450-4145-800a-9e277a958d2b", 00:17:01.392 "strip_size_kb": 0, 00:17:01.392 "state": "online", 00:17:01.392 "raid_level": "raid1", 00:17:01.392 "superblock": true, 00:17:01.392 "num_base_bdevs": 2, 00:17:01.392 "num_base_bdevs_discovered": 1, 00:17:01.392 "num_base_bdevs_operational": 1, 00:17:01.392 "base_bdevs_list": [ 00:17:01.392 { 00:17:01.392 "name": null, 00:17:01.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.392 "is_configured": false, 00:17:01.392 "data_offset": 256, 00:17:01.392 "data_size": 7936 00:17:01.392 }, 00:17:01.392 { 00:17:01.392 "name": "pt2", 00:17:01.392 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:01.392 "is_configured": true, 00:17:01.392 "data_offset": 256, 00:17:01.392 "data_size": 7936 00:17:01.392 } 00:17:01.392 ] 00:17:01.392 }' 00:17:01.392 20:29:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.392 20:29:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:01.650 20:29:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:01.650 20:29:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.650 20:29:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:01.650 [2024-11-26 20:29:55.141354] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:01.650 [2024-11-26 20:29:55.141453] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:01.650 [2024-11-26 20:29:55.141558] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:01.650 [2024-11-26 20:29:55.141637] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:01.650 [2024-11-26 20:29:55.141708] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:17:01.650 20:29:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.650 20:29:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:01.650 20:29:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.650 20:29:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.650 20:29:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:01.650 20:29:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.651 20:29:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:01.651 20:29:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:01.651 20:29:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:01.651 20:29:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:01.651 20:29:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.651 20:29:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:01.651 [2024-11-26 20:29:55.189260] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:01.651 [2024-11-26 20:29:55.189395] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.651 [2024-11-26 20:29:55.189451] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:01.651 [2024-11-26 20:29:55.189499] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.651 [2024-11-26 20:29:55.192046] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.651 [2024-11-26 20:29:55.192137] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:01.651 [2024-11-26 20:29:55.192256] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:01.651 [2024-11-26 20:29:55.192345] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:01.651 [2024-11-26 20:29:55.192515] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:01.651 [2024-11-26 20:29:55.192587] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:01.651 [2024-11-26 20:29:55.192670] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:17:01.651 [2024-11-26 20:29:55.192791] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:01.651 [2024-11-26 20:29:55.192920] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:17:01.651 [2024-11-26 20:29:55.192966] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:01.651 [2024-11-26 20:29:55.193270] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:01.651 [2024-11-26 20:29:55.193449] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:17:01.651 [2024-11-26 20:29:55.193466] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:17:01.651 [2024-11-26 20:29:55.193671] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:01.651 pt1 00:17:01.651 20:29:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.651 20:29:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 
00:17:01.651 20:29:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:01.651 20:29:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:01.651 20:29:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:01.651 20:29:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:01.651 20:29:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:01.651 20:29:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:01.651 20:29:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.651 20:29:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.651 20:29:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.651 20:29:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.909 20:29:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.909 20:29:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.909 20:29:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.909 20:29:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:01.909 20:29:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.909 20:29:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.909 "name": "raid_bdev1", 00:17:01.909 "uuid": "950cee67-d450-4145-800a-9e277a958d2b", 00:17:01.909 "strip_size_kb": 0, 00:17:01.909 "state": "online", 00:17:01.909 "raid_level": "raid1", 
00:17:01.909 "superblock": true, 00:17:01.909 "num_base_bdevs": 2, 00:17:01.909 "num_base_bdevs_discovered": 1, 00:17:01.909 "num_base_bdevs_operational": 1, 00:17:01.909 "base_bdevs_list": [ 00:17:01.909 { 00:17:01.909 "name": null, 00:17:01.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.909 "is_configured": false, 00:17:01.909 "data_offset": 256, 00:17:01.909 "data_size": 7936 00:17:01.909 }, 00:17:01.909 { 00:17:01.909 "name": "pt2", 00:17:01.909 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:01.909 "is_configured": true, 00:17:01.909 "data_offset": 256, 00:17:01.909 "data_size": 7936 00:17:01.909 } 00:17:01.909 ] 00:17:01.909 }' 00:17:01.909 20:29:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.909 20:29:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.168 20:29:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:02.168 20:29:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:02.168 20:29:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.168 20:29:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.168 20:29:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.168 20:29:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:02.168 20:29:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:02.168 20:29:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:02.168 20:29:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.168 20:29:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.168 
[2024-11-26 20:29:55.673179] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:02.168 20:29:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.168 20:29:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 950cee67-d450-4145-800a-9e277a958d2b '!=' 950cee67-d450-4145-800a-9e277a958d2b ']' 00:17:02.168 20:29:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 97154 00:17:02.168 20:29:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # '[' -z 97154 ']' 00:17:02.168 20:29:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # kill -0 97154 00:17:02.168 20:29:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # uname 00:17:02.168 20:29:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:02.428 20:29:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97154 00:17:02.428 killing process with pid 97154 00:17:02.428 20:29:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:02.428 20:29:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:02.428 20:29:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97154' 00:17:02.428 20:29:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@969 -- # kill 97154 00:17:02.428 [2024-11-26 20:29:55.748310] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:02.428 [2024-11-26 20:29:55.748424] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:02.428 20:29:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@974 -- # wait 97154 00:17:02.428 [2024-11-26 20:29:55.748498] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:17:02.428 [2024-11-26 20:29:55.748509] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:17:02.428 [2024-11-26 20:29:55.784375] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:02.686 ************************************ 00:17:02.686 END TEST raid_superblock_test_4k 00:17:02.686 ************************************ 00:17:02.686 20:29:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:17:02.686 00:17:02.686 real 0m5.237s 00:17:02.686 user 0m8.422s 00:17:02.686 sys 0m1.136s 00:17:02.686 20:29:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:02.686 20:29:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.686 20:29:56 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:17:02.686 20:29:56 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:17:02.686 20:29:56 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:17:02.686 20:29:56 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:02.686 20:29:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:02.686 ************************************ 00:17:02.686 START TEST raid_rebuild_test_sb_4k 00:17:02.686 ************************************ 00:17:02.686 20:29:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:17:02.686 20:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:02.686 20:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:02.686 20:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:02.686 20:29:56 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:02.686 20:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:02.686 20:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:02.687 20:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:02.687 20:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:02.687 20:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:02.687 20:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:02.687 20:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:02.687 20:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:02.687 20:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:02.687 20:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:02.687 20:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:02.687 20:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:02.687 20:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:02.687 20:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:02.945 20:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:02.945 20:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:02.945 20:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:02.945 20:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:02.945 20:29:56 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:02.945 20:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:02.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:02.945 20:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=97472 00:17:02.945 20:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 97472 00:17:02.945 20:29:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:02.945 20:29:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 97472 ']' 00:17:02.945 20:29:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.945 20:29:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:02.945 20:29:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.945 20:29:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:02.945 20:29:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.945 [2024-11-26 20:29:56.324572] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:02.945 [2024-11-26 20:29:56.324797] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97472 ] 00:17:02.945 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:17:02.945 Zero copy mechanism will not be used. 00:17:02.945 [2024-11-26 20:29:56.475180] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.204 [2024-11-26 20:29:56.558386] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:03.204 [2024-11-26 20:29:56.633388] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:03.204 [2024-11-26 20:29:56.633516] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:03.771 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:03.771 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:17:03.771 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:03.771 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:17:03.771 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.771 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.771 BaseBdev1_malloc 00:17:03.771 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.771 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:03.771 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.771 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.771 [2024-11-26 20:29:57.271605] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:03.771 [2024-11-26 20:29:57.271693] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.771 [2024-11-26 20:29:57.271722] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000007280 00:17:03.771 [2024-11-26 20:29:57.271747] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.771 [2024-11-26 20:29:57.274164] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.771 [2024-11-26 20:29:57.274256] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:03.771 BaseBdev1 00:17:03.771 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.771 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:03.771 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:17:03.771 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.772 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.772 BaseBdev2_malloc 00:17:03.772 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.772 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:03.772 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.772 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.772 [2024-11-26 20:29:57.311327] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:03.772 [2024-11-26 20:29:57.311387] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.772 [2024-11-26 20:29:57.311407] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:03.772 [2024-11-26 20:29:57.311416] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:17:03.772 [2024-11-26 20:29:57.313668] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.772 [2024-11-26 20:29:57.313702] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:03.772 BaseBdev2 00:17:03.772 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.772 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:17:03.772 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.772 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.030 spare_malloc 00:17:04.030 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.030 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:04.030 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.030 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.030 spare_delay 00:17:04.030 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.030 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:04.030 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.030 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.030 [2024-11-26 20:29:57.358006] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:04.030 [2024-11-26 20:29:57.358108] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:04.030 [2024-11-26 20:29:57.358151] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:04.030 [2024-11-26 20:29:57.358184] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:04.030 [2024-11-26 20:29:57.360311] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:04.030 [2024-11-26 20:29:57.360381] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:04.030 spare 00:17:04.030 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.030 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:04.030 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.031 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.031 [2024-11-26 20:29:57.370025] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:04.031 [2024-11-26 20:29:57.371935] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:04.031 [2024-11-26 20:29:57.372130] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:17:04.031 [2024-11-26 20:29:57.372175] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:04.031 [2024-11-26 20:29:57.372447] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:17:04.031 [2024-11-26 20:29:57.372640] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:17:04.031 [2024-11-26 20:29:57.372686] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:17:04.031 [2024-11-26 20:29:57.372881] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:04.031 
20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.031 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:04.031 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:04.031 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:04.031 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:04.031 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:04.031 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:04.031 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:04.031 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:04.031 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:04.031 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:04.031 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.031 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.031 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.031 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.031 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.031 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:04.031 "name": "raid_bdev1", 00:17:04.031 "uuid": "deebc597-6157-4cec-81f0-ba68cdd696c3", 
00:17:04.031 "strip_size_kb": 0, 00:17:04.031 "state": "online", 00:17:04.031 "raid_level": "raid1", 00:17:04.031 "superblock": true, 00:17:04.031 "num_base_bdevs": 2, 00:17:04.031 "num_base_bdevs_discovered": 2, 00:17:04.031 "num_base_bdevs_operational": 2, 00:17:04.031 "base_bdevs_list": [ 00:17:04.031 { 00:17:04.031 "name": "BaseBdev1", 00:17:04.031 "uuid": "5d1c8075-28ac-5b87-a9d4-2b4e6db2152d", 00:17:04.031 "is_configured": true, 00:17:04.031 "data_offset": 256, 00:17:04.031 "data_size": 7936 00:17:04.031 }, 00:17:04.031 { 00:17:04.031 "name": "BaseBdev2", 00:17:04.031 "uuid": "7d40d764-e982-5f01-84ce-ee6970d9e024", 00:17:04.031 "is_configured": true, 00:17:04.031 "data_offset": 256, 00:17:04.031 "data_size": 7936 00:17:04.031 } 00:17:04.031 ] 00:17:04.031 }' 00:17:04.031 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:04.031 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.290 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:04.290 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.290 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.290 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:04.290 [2024-11-26 20:29:57.829548] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:04.290 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.548 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:17:04.548 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:04.548 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 
00:17:04.548 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.548 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.548 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.548 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:04.548 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:04.548 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:04.548 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:04.548 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:04.548 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:04.548 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:04.548 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:04.548 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:04.548 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:04.548 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:04.548 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:04.548 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:04.548 20:29:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:04.807 [2024-11-26 20:29:58.128836] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005e10 00:17:04.807 /dev/nbd0 00:17:04.807 20:29:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:04.807 20:29:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:04.807 20:29:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:04.807 20:29:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:17:04.807 20:29:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:04.807 20:29:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:04.807 20:29:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:04.807 20:29:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:17:04.807 20:29:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:04.807 20:29:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:04.807 20:29:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:04.807 1+0 records in 00:17:04.807 1+0 records out 00:17:04.807 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000316953 s, 12.9 MB/s 00:17:04.807 20:29:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:04.807 20:29:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:17:04.807 20:29:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:04.807 20:29:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:04.807 20:29:58 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:17:04.807 20:29:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:04.807 20:29:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:04.807 20:29:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:04.807 20:29:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:04.807 20:29:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:17:05.374 7936+0 records in 00:17:05.374 7936+0 records out 00:17:05.374 32505856 bytes (33 MB, 31 MiB) copied, 0.643998 s, 50.5 MB/s 00:17:05.374 20:29:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:05.374 20:29:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:05.374 20:29:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:05.374 20:29:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:05.374 20:29:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:17:05.374 20:29:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:05.374 20:29:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:05.652 20:29:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:05.653 [2024-11-26 20:29:59.101889] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:05.653 20:29:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:05.653 20:29:59 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:05.653 20:29:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:05.653 20:29:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:05.653 20:29:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:05.653 20:29:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:05.653 20:29:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:05.653 20:29:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:05.653 20:29:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.653 20:29:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:05.653 [2024-11-26 20:29:59.121987] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:05.653 20:29:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.653 20:29:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:05.653 20:29:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:05.653 20:29:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:05.653 20:29:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:05.653 20:29:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:05.653 20:29:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:05.653 20:29:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.653 20:29:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 
-- # local num_base_bdevs 00:17:05.653 20:29:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.653 20:29:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.653 20:29:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.653 20:29:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.653 20:29:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.653 20:29:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:05.653 20:29:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.653 20:29:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.653 "name": "raid_bdev1", 00:17:05.653 "uuid": "deebc597-6157-4cec-81f0-ba68cdd696c3", 00:17:05.653 "strip_size_kb": 0, 00:17:05.653 "state": "online", 00:17:05.653 "raid_level": "raid1", 00:17:05.653 "superblock": true, 00:17:05.653 "num_base_bdevs": 2, 00:17:05.653 "num_base_bdevs_discovered": 1, 00:17:05.653 "num_base_bdevs_operational": 1, 00:17:05.653 "base_bdevs_list": [ 00:17:05.653 { 00:17:05.653 "name": null, 00:17:05.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.653 "is_configured": false, 00:17:05.653 "data_offset": 0, 00:17:05.653 "data_size": 7936 00:17:05.653 }, 00:17:05.653 { 00:17:05.653 "name": "BaseBdev2", 00:17:05.653 "uuid": "7d40d764-e982-5f01-84ce-ee6970d9e024", 00:17:05.653 "is_configured": true, 00:17:05.653 "data_offset": 256, 00:17:05.653 "data_size": 7936 00:17:05.653 } 00:17:05.653 ] 00:17:05.653 }' 00:17:05.653 20:29:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.653 20:29:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:06.217 20:29:59 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:06.217 20:29:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.217 20:29:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:06.217 [2024-11-26 20:29:59.629307] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:06.217 [2024-11-26 20:29:59.634198] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d0c0 00:17:06.217 20:29:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.217 20:29:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:06.217 [2024-11-26 20:29:59.636405] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:07.150 20:30:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:07.150 20:30:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:07.150 20:30:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:07.150 20:30:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:07.150 20:30:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:07.150 20:30:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.150 20:30:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.150 20:30:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.150 20:30:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.150 20:30:00 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.150 20:30:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:07.150 "name": "raid_bdev1", 00:17:07.150 "uuid": "deebc597-6157-4cec-81f0-ba68cdd696c3", 00:17:07.150 "strip_size_kb": 0, 00:17:07.150 "state": "online", 00:17:07.150 "raid_level": "raid1", 00:17:07.150 "superblock": true, 00:17:07.150 "num_base_bdevs": 2, 00:17:07.150 "num_base_bdevs_discovered": 2, 00:17:07.150 "num_base_bdevs_operational": 2, 00:17:07.150 "process": { 00:17:07.150 "type": "rebuild", 00:17:07.150 "target": "spare", 00:17:07.150 "progress": { 00:17:07.150 "blocks": 2560, 00:17:07.150 "percent": 32 00:17:07.150 } 00:17:07.150 }, 00:17:07.150 "base_bdevs_list": [ 00:17:07.150 { 00:17:07.150 "name": "spare", 00:17:07.150 "uuid": "eec53323-c836-5ed3-9bf0-215c0e5b0864", 00:17:07.150 "is_configured": true, 00:17:07.150 "data_offset": 256, 00:17:07.150 "data_size": 7936 00:17:07.150 }, 00:17:07.150 { 00:17:07.150 "name": "BaseBdev2", 00:17:07.150 "uuid": "7d40d764-e982-5f01-84ce-ee6970d9e024", 00:17:07.150 "is_configured": true, 00:17:07.150 "data_offset": 256, 00:17:07.150 "data_size": 7936 00:17:07.150 } 00:17:07.150 ] 00:17:07.150 }' 00:17:07.150 20:30:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:07.408 20:30:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:07.408 20:30:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:07.408 20:30:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:07.408 20:30:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:07.408 20:30:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.408 20:30:00 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.408 [2024-11-26 20:30:00.785473] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:07.408 [2024-11-26 20:30:00.845356] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:07.408 [2024-11-26 20:30:00.845449] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:07.408 [2024-11-26 20:30:00.845471] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:07.408 [2024-11-26 20:30:00.845480] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:07.408 20:30:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.408 20:30:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:07.408 20:30:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:07.408 20:30:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:07.408 20:30:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:07.408 20:30:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:07.408 20:30:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:07.408 20:30:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.408 20:30:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.408 20:30:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.408 20:30:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.409 20:30:00 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.409 20:30:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.409 20:30:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.409 20:30:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.409 20:30:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.409 20:30:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.409 "name": "raid_bdev1", 00:17:07.409 "uuid": "deebc597-6157-4cec-81f0-ba68cdd696c3", 00:17:07.409 "strip_size_kb": 0, 00:17:07.409 "state": "online", 00:17:07.409 "raid_level": "raid1", 00:17:07.409 "superblock": true, 00:17:07.409 "num_base_bdevs": 2, 00:17:07.409 "num_base_bdevs_discovered": 1, 00:17:07.409 "num_base_bdevs_operational": 1, 00:17:07.409 "base_bdevs_list": [ 00:17:07.409 { 00:17:07.409 "name": null, 00:17:07.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.409 "is_configured": false, 00:17:07.409 "data_offset": 0, 00:17:07.409 "data_size": 7936 00:17:07.409 }, 00:17:07.409 { 00:17:07.409 "name": "BaseBdev2", 00:17:07.409 "uuid": "7d40d764-e982-5f01-84ce-ee6970d9e024", 00:17:07.409 "is_configured": true, 00:17:07.409 "data_offset": 256, 00:17:07.409 "data_size": 7936 00:17:07.409 } 00:17:07.409 ] 00:17:07.409 }' 00:17:07.409 20:30:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.409 20:30:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.974 20:30:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:07.974 20:30:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:07.974 20:30:01 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:07.974 20:30:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:07.974 20:30:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:07.974 20:30:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.974 20:30:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.974 20:30:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.974 20:30:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.974 20:30:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.974 20:30:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:07.974 "name": "raid_bdev1", 00:17:07.974 "uuid": "deebc597-6157-4cec-81f0-ba68cdd696c3", 00:17:07.974 "strip_size_kb": 0, 00:17:07.974 "state": "online", 00:17:07.974 "raid_level": "raid1", 00:17:07.974 "superblock": true, 00:17:07.974 "num_base_bdevs": 2, 00:17:07.974 "num_base_bdevs_discovered": 1, 00:17:07.974 "num_base_bdevs_operational": 1, 00:17:07.974 "base_bdevs_list": [ 00:17:07.974 { 00:17:07.974 "name": null, 00:17:07.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.974 "is_configured": false, 00:17:07.974 "data_offset": 0, 00:17:07.974 "data_size": 7936 00:17:07.974 }, 00:17:07.974 { 00:17:07.974 "name": "BaseBdev2", 00:17:07.974 "uuid": "7d40d764-e982-5f01-84ce-ee6970d9e024", 00:17:07.974 "is_configured": true, 00:17:07.974 "data_offset": 256, 00:17:07.974 "data_size": 7936 00:17:07.974 } 00:17:07.974 ] 00:17:07.974 }' 00:17:07.974 20:30:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:07.974 20:30:01 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:07.974 20:30:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:07.974 20:30:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:07.974 20:30:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:07.974 20:30:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.974 20:30:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.974 [2024-11-26 20:30:01.486430] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:07.974 [2024-11-26 20:30:01.491355] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d190 00:17:07.974 20:30:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.974 20:30:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:07.974 [2024-11-26 20:30:01.493537] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:09.348 20:30:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:09.348 20:30:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:09.348 20:30:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:09.348 20:30:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:09.348 20:30:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:09.348 20:30:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.348 20:30:02 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.348 20:30:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.348 20:30:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:09.348 20:30:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.348 20:30:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:09.348 "name": "raid_bdev1", 00:17:09.348 "uuid": "deebc597-6157-4cec-81f0-ba68cdd696c3", 00:17:09.348 "strip_size_kb": 0, 00:17:09.348 "state": "online", 00:17:09.348 "raid_level": "raid1", 00:17:09.348 "superblock": true, 00:17:09.348 "num_base_bdevs": 2, 00:17:09.348 "num_base_bdevs_discovered": 2, 00:17:09.348 "num_base_bdevs_operational": 2, 00:17:09.348 "process": { 00:17:09.348 "type": "rebuild", 00:17:09.348 "target": "spare", 00:17:09.348 "progress": { 00:17:09.348 "blocks": 2560, 00:17:09.348 "percent": 32 00:17:09.348 } 00:17:09.348 }, 00:17:09.348 "base_bdevs_list": [ 00:17:09.348 { 00:17:09.348 "name": "spare", 00:17:09.348 "uuid": "eec53323-c836-5ed3-9bf0-215c0e5b0864", 00:17:09.348 "is_configured": true, 00:17:09.348 "data_offset": 256, 00:17:09.348 "data_size": 7936 00:17:09.348 }, 00:17:09.348 { 00:17:09.348 "name": "BaseBdev2", 00:17:09.348 "uuid": "7d40d764-e982-5f01-84ce-ee6970d9e024", 00:17:09.348 "is_configured": true, 00:17:09.348 "data_offset": 256, 00:17:09.348 "data_size": 7936 00:17:09.348 } 00:17:09.348 ] 00:17:09.348 }' 00:17:09.348 20:30:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:09.348 20:30:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:09.348 20:30:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:09.348 20:30:02 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:09.348 20:30:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:09.348 20:30:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:09.348 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:09.348 20:30:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:09.348 20:30:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:09.348 20:30:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:09.348 20:30:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=591 00:17:09.348 20:30:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:09.348 20:30:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:09.348 20:30:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:09.348 20:30:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:09.348 20:30:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:09.348 20:30:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:09.348 20:30:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.348 20:30:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.348 20:30:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.348 20:30:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:09.348 20:30:02 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.348 20:30:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:09.348 "name": "raid_bdev1", 00:17:09.348 "uuid": "deebc597-6157-4cec-81f0-ba68cdd696c3", 00:17:09.348 "strip_size_kb": 0, 00:17:09.348 "state": "online", 00:17:09.348 "raid_level": "raid1", 00:17:09.348 "superblock": true, 00:17:09.348 "num_base_bdevs": 2, 00:17:09.348 "num_base_bdevs_discovered": 2, 00:17:09.348 "num_base_bdevs_operational": 2, 00:17:09.348 "process": { 00:17:09.348 "type": "rebuild", 00:17:09.348 "target": "spare", 00:17:09.348 "progress": { 00:17:09.348 "blocks": 2816, 00:17:09.348 "percent": 35 00:17:09.348 } 00:17:09.348 }, 00:17:09.349 "base_bdevs_list": [ 00:17:09.349 { 00:17:09.349 "name": "spare", 00:17:09.349 "uuid": "eec53323-c836-5ed3-9bf0-215c0e5b0864", 00:17:09.349 "is_configured": true, 00:17:09.349 "data_offset": 256, 00:17:09.349 "data_size": 7936 00:17:09.349 }, 00:17:09.349 { 00:17:09.349 "name": "BaseBdev2", 00:17:09.349 "uuid": "7d40d764-e982-5f01-84ce-ee6970d9e024", 00:17:09.349 "is_configured": true, 00:17:09.349 "data_offset": 256, 00:17:09.349 "data_size": 7936 00:17:09.349 } 00:17:09.349 ] 00:17:09.349 }' 00:17:09.349 20:30:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:09.349 20:30:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:09.349 20:30:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:09.349 20:30:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:09.349 20:30:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:10.282 20:30:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:10.282 20:30:03 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:10.282 20:30:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:10.282 20:30:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:10.282 20:30:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:10.282 20:30:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:10.282 20:30:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.282 20:30:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.540 20:30:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.540 20:30:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:10.540 20:30:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.540 20:30:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:10.540 "name": "raid_bdev1", 00:17:10.540 "uuid": "deebc597-6157-4cec-81f0-ba68cdd696c3", 00:17:10.540 "strip_size_kb": 0, 00:17:10.540 "state": "online", 00:17:10.540 "raid_level": "raid1", 00:17:10.540 "superblock": true, 00:17:10.540 "num_base_bdevs": 2, 00:17:10.540 "num_base_bdevs_discovered": 2, 00:17:10.540 "num_base_bdevs_operational": 2, 00:17:10.540 "process": { 00:17:10.540 "type": "rebuild", 00:17:10.540 "target": "spare", 00:17:10.540 "progress": { 00:17:10.540 "blocks": 5888, 00:17:10.540 "percent": 74 00:17:10.540 } 00:17:10.540 }, 00:17:10.540 "base_bdevs_list": [ 00:17:10.540 { 00:17:10.540 "name": "spare", 00:17:10.540 "uuid": "eec53323-c836-5ed3-9bf0-215c0e5b0864", 00:17:10.540 "is_configured": true, 00:17:10.540 "data_offset": 256, 00:17:10.540 "data_size": 7936 00:17:10.540 
}, 00:17:10.540 { 00:17:10.540 "name": "BaseBdev2", 00:17:10.540 "uuid": "7d40d764-e982-5f01-84ce-ee6970d9e024", 00:17:10.540 "is_configured": true, 00:17:10.540 "data_offset": 256, 00:17:10.540 "data_size": 7936 00:17:10.540 } 00:17:10.540 ] 00:17:10.540 }' 00:17:10.540 20:30:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.540 20:30:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:10.540 20:30:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:10.540 20:30:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:10.540 20:30:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:11.105 [2024-11-26 20:30:04.614372] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:11.105 [2024-11-26 20:30:04.614489] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:11.105 [2024-11-26 20:30:04.614666] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:11.730 20:30:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:11.730 20:30:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:11.730 20:30:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:11.730 20:30:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:11.730 20:30:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:11.730 20:30:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:11.730 20:30:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:11.730 20:30:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.730 20:30:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.730 20:30:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:11.730 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.730 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:11.730 "name": "raid_bdev1", 00:17:11.730 "uuid": "deebc597-6157-4cec-81f0-ba68cdd696c3", 00:17:11.730 "strip_size_kb": 0, 00:17:11.730 "state": "online", 00:17:11.730 "raid_level": "raid1", 00:17:11.730 "superblock": true, 00:17:11.730 "num_base_bdevs": 2, 00:17:11.730 "num_base_bdevs_discovered": 2, 00:17:11.730 "num_base_bdevs_operational": 2, 00:17:11.730 "base_bdevs_list": [ 00:17:11.730 { 00:17:11.730 "name": "spare", 00:17:11.730 "uuid": "eec53323-c836-5ed3-9bf0-215c0e5b0864", 00:17:11.730 "is_configured": true, 00:17:11.730 "data_offset": 256, 00:17:11.730 "data_size": 7936 00:17:11.730 }, 00:17:11.730 { 00:17:11.730 "name": "BaseBdev2", 00:17:11.730 "uuid": "7d40d764-e982-5f01-84ce-ee6970d9e024", 00:17:11.730 "is_configured": true, 00:17:11.730 "data_offset": 256, 00:17:11.730 "data_size": 7936 00:17:11.730 } 00:17:11.730 ] 00:17:11.730 }' 00:17:11.730 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:11.730 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:11.730 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:11.730 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:11.730 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 
00:17:11.730 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:11.730 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:11.730 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:11.730 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:11.731 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:11.731 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.731 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.731 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:11.731 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.731 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.731 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:11.731 "name": "raid_bdev1", 00:17:11.731 "uuid": "deebc597-6157-4cec-81f0-ba68cdd696c3", 00:17:11.731 "strip_size_kb": 0, 00:17:11.731 "state": "online", 00:17:11.731 "raid_level": "raid1", 00:17:11.731 "superblock": true, 00:17:11.731 "num_base_bdevs": 2, 00:17:11.731 "num_base_bdevs_discovered": 2, 00:17:11.731 "num_base_bdevs_operational": 2, 00:17:11.731 "base_bdevs_list": [ 00:17:11.731 { 00:17:11.731 "name": "spare", 00:17:11.731 "uuid": "eec53323-c836-5ed3-9bf0-215c0e5b0864", 00:17:11.731 "is_configured": true, 00:17:11.731 "data_offset": 256, 00:17:11.731 "data_size": 7936 00:17:11.731 }, 00:17:11.731 { 00:17:11.731 "name": "BaseBdev2", 00:17:11.731 "uuid": "7d40d764-e982-5f01-84ce-ee6970d9e024", 00:17:11.731 "is_configured": true, 
00:17:11.731 "data_offset": 256, 00:17:11.731 "data_size": 7936 00:17:11.731 } 00:17:11.731 ] 00:17:11.731 }' 00:17:11.731 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:11.731 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:11.731 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:11.731 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:11.731 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:11.731 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:11.731 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:11.731 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:11.731 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:11.731 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:11.731 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:11.731 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:11.731 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:11.731 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:11.988 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.988 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.988 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:17:11.988 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.988 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.988 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.988 "name": "raid_bdev1", 00:17:11.988 "uuid": "deebc597-6157-4cec-81f0-ba68cdd696c3", 00:17:11.988 "strip_size_kb": 0, 00:17:11.988 "state": "online", 00:17:11.988 "raid_level": "raid1", 00:17:11.988 "superblock": true, 00:17:11.988 "num_base_bdevs": 2, 00:17:11.988 "num_base_bdevs_discovered": 2, 00:17:11.988 "num_base_bdevs_operational": 2, 00:17:11.988 "base_bdevs_list": [ 00:17:11.988 { 00:17:11.988 "name": "spare", 00:17:11.988 "uuid": "eec53323-c836-5ed3-9bf0-215c0e5b0864", 00:17:11.988 "is_configured": true, 00:17:11.988 "data_offset": 256, 00:17:11.988 "data_size": 7936 00:17:11.988 }, 00:17:11.988 { 00:17:11.988 "name": "BaseBdev2", 00:17:11.988 "uuid": "7d40d764-e982-5f01-84ce-ee6970d9e024", 00:17:11.988 "is_configured": true, 00:17:11.988 "data_offset": 256, 00:17:11.988 "data_size": 7936 00:17:11.988 } 00:17:11.988 ] 00:17:11.988 }' 00:17:11.988 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.988 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.246 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:12.246 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.246 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.246 [2024-11-26 20:30:05.750765] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:12.246 [2024-11-26 20:30:05.750813] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:17:12.246 [2024-11-26 20:30:05.750911] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:12.246 [2024-11-26 20:30:05.750983] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:12.246 [2024-11-26 20:30:05.750998] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:17:12.246 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.246 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.246 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.246 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.246 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:17:12.246 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.504 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:12.504 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:12.504 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:12.504 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:12.504 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:12.504 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:12.504 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:12.504 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:12.504 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:12.504 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:12.504 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:12.504 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:12.504 20:30:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:12.504 /dev/nbd0 00:17:12.504 20:30:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:12.762 20:30:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:12.762 20:30:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:12.762 20:30:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:17:12.762 20:30:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:12.762 20:30:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:12.762 20:30:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:12.762 20:30:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:17:12.762 20:30:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:12.762 20:30:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:12.762 20:30:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:12.762 1+0 records in 00:17:12.762 1+0 records out 00:17:12.762 4096 bytes (4.1 kB, 4.0 
KiB) copied, 0.000275733 s, 14.9 MB/s 00:17:12.762 20:30:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:12.762 20:30:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:17:12.762 20:30:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:12.762 20:30:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:12.762 20:30:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:17:12.762 20:30:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:12.762 20:30:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:12.762 20:30:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:13.022 /dev/nbd1 00:17:13.022 20:30:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:13.022 20:30:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:13.022 20:30:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:17:13.022 20:30:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:17:13.022 20:30:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:13.022 20:30:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:13.022 20:30:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:17:13.022 20:30:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:17:13.022 20:30:06 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:13.022 20:30:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:13.022 20:30:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:13.022 1+0 records in 00:17:13.022 1+0 records out 00:17:13.022 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346181 s, 11.8 MB/s 00:17:13.022 20:30:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:13.022 20:30:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:17:13.022 20:30:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:13.022 20:30:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:13.022 20:30:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:17:13.022 20:30:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:13.022 20:30:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:13.022 20:30:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:13.022 20:30:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:13.022 20:30:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:13.022 20:30:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:13.022 20:30:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:13.022 20:30:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 
00:17:13.022 20:30:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:13.022 20:30:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:13.280 20:30:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:13.280 20:30:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:13.280 20:30:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:13.280 20:30:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:13.280 20:30:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:13.280 20:30:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:13.280 20:30:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:13.280 20:30:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:13.280 20:30:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:13.281 20:30:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:13.538 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:13.538 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:13.538 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:13.538 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:13.538 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:13.538 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # 
grep -q -w nbd1 /proc/partitions 00:17:13.538 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:13.538 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:13.538 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:13.538 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:13.538 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.538 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.538 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.538 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:13.538 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.538 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.538 [2024-11-26 20:30:07.051451] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:13.538 [2024-11-26 20:30:07.051551] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:13.538 [2024-11-26 20:30:07.051576] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:13.538 [2024-11-26 20:30:07.051591] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:13.538 [2024-11-26 20:30:07.054251] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:13.538 [2024-11-26 20:30:07.054301] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:13.538 [2024-11-26 20:30:07.054395] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:13.538 [2024-11-26 
20:30:07.054450] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:13.538 [2024-11-26 20:30:07.054578] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:13.538 spare 00:17:13.538 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.538 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:13.538 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.538 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.794 [2024-11-26 20:30:07.154523] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:17:13.794 [2024-11-26 20:30:07.154576] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:13.794 [2024-11-26 20:30:07.154961] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c19b0 00:17:13.794 [2024-11-26 20:30:07.155178] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:17:13.794 [2024-11-26 20:30:07.155194] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:17:13.794 [2024-11-26 20:30:07.155414] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:13.794 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.794 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:13.795 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:13.795 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:13.795 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:13.795 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:13.795 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:13.795 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.795 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.795 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.795 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.795 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.795 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.795 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.795 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.795 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.795 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.795 "name": "raid_bdev1", 00:17:13.795 "uuid": "deebc597-6157-4cec-81f0-ba68cdd696c3", 00:17:13.795 "strip_size_kb": 0, 00:17:13.795 "state": "online", 00:17:13.795 "raid_level": "raid1", 00:17:13.795 "superblock": true, 00:17:13.795 "num_base_bdevs": 2, 00:17:13.795 "num_base_bdevs_discovered": 2, 00:17:13.795 "num_base_bdevs_operational": 2, 00:17:13.795 "base_bdevs_list": [ 00:17:13.795 { 00:17:13.795 "name": "spare", 00:17:13.795 "uuid": "eec53323-c836-5ed3-9bf0-215c0e5b0864", 00:17:13.795 "is_configured": true, 00:17:13.795 "data_offset": 256, 00:17:13.795 "data_size": 7936 00:17:13.795 }, 00:17:13.795 { 
00:17:13.795 "name": "BaseBdev2", 00:17:13.795 "uuid": "7d40d764-e982-5f01-84ce-ee6970d9e024", 00:17:13.795 "is_configured": true, 00:17:13.795 "data_offset": 256, 00:17:13.795 "data_size": 7936 00:17:13.795 } 00:17:13.795 ] 00:17:13.795 }' 00:17:13.795 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.795 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.368 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:14.368 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:14.368 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:14.368 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:14.368 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:14.368 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.368 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.368 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.368 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.368 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.368 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:14.368 "name": "raid_bdev1", 00:17:14.368 "uuid": "deebc597-6157-4cec-81f0-ba68cdd696c3", 00:17:14.368 "strip_size_kb": 0, 00:17:14.368 "state": "online", 00:17:14.368 "raid_level": "raid1", 00:17:14.368 "superblock": true, 00:17:14.368 "num_base_bdevs": 2, 00:17:14.368 "num_base_bdevs_discovered": 2, 
00:17:14.368 "num_base_bdevs_operational": 2, 00:17:14.368 "base_bdevs_list": [ 00:17:14.368 { 00:17:14.368 "name": "spare", 00:17:14.368 "uuid": "eec53323-c836-5ed3-9bf0-215c0e5b0864", 00:17:14.368 "is_configured": true, 00:17:14.368 "data_offset": 256, 00:17:14.368 "data_size": 7936 00:17:14.368 }, 00:17:14.368 { 00:17:14.368 "name": "BaseBdev2", 00:17:14.368 "uuid": "7d40d764-e982-5f01-84ce-ee6970d9e024", 00:17:14.368 "is_configured": true, 00:17:14.368 "data_offset": 256, 00:17:14.368 "data_size": 7936 00:17:14.368 } 00:17:14.368 ] 00:17:14.368 }' 00:17:14.368 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:14.368 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:14.368 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:14.368 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:14.368 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.368 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.368 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.368 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:14.368 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.368 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:14.368 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:14.368 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.368 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:17:14.368 [2024-11-26 20:30:07.822315] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:14.368 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.368 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:14.368 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:14.368 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:14.368 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:14.368 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:14.368 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:14.368 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.368 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.368 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.368 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:14.368 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.368 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.368 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.368 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.368 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.368 20:30:07 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:14.368 "name": "raid_bdev1", 00:17:14.368 "uuid": "deebc597-6157-4cec-81f0-ba68cdd696c3", 00:17:14.368 "strip_size_kb": 0, 00:17:14.368 "state": "online", 00:17:14.368 "raid_level": "raid1", 00:17:14.368 "superblock": true, 00:17:14.368 "num_base_bdevs": 2, 00:17:14.368 "num_base_bdevs_discovered": 1, 00:17:14.368 "num_base_bdevs_operational": 1, 00:17:14.368 "base_bdevs_list": [ 00:17:14.368 { 00:17:14.368 "name": null, 00:17:14.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.368 "is_configured": false, 00:17:14.368 "data_offset": 0, 00:17:14.368 "data_size": 7936 00:17:14.368 }, 00:17:14.368 { 00:17:14.368 "name": "BaseBdev2", 00:17:14.368 "uuid": "7d40d764-e982-5f01-84ce-ee6970d9e024", 00:17:14.368 "is_configured": true, 00:17:14.368 "data_offset": 256, 00:17:14.368 "data_size": 7936 00:17:14.368 } 00:17:14.368 ] 00:17:14.368 }' 00:17:14.368 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:14.368 20:30:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.932 20:30:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:14.932 20:30:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.932 20:30:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.932 [2024-11-26 20:30:08.249590] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:14.932 [2024-11-26 20:30:08.249830] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:14.932 [2024-11-26 20:30:08.249868] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:14.932 [2024-11-26 20:30:08.249918] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:14.932 [2024-11-26 20:30:08.254537] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1a80 00:17:14.932 20:30:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.932 20:30:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:14.932 [2024-11-26 20:30:08.256686] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:15.865 20:30:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:15.865 20:30:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:15.865 20:30:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:15.865 20:30:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:15.865 20:30:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:15.865 20:30:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.865 20:30:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.865 20:30:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.865 20:30:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.865 20:30:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.865 20:30:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:15.865 "name": "raid_bdev1", 00:17:15.865 "uuid": "deebc597-6157-4cec-81f0-ba68cdd696c3", 00:17:15.865 "strip_size_kb": 0, 00:17:15.865 "state": "online", 
00:17:15.865 "raid_level": "raid1", 00:17:15.865 "superblock": true, 00:17:15.865 "num_base_bdevs": 2, 00:17:15.865 "num_base_bdevs_discovered": 2, 00:17:15.865 "num_base_bdevs_operational": 2, 00:17:15.865 "process": { 00:17:15.865 "type": "rebuild", 00:17:15.865 "target": "spare", 00:17:15.865 "progress": { 00:17:15.865 "blocks": 2560, 00:17:15.865 "percent": 32 00:17:15.865 } 00:17:15.865 }, 00:17:15.865 "base_bdevs_list": [ 00:17:15.865 { 00:17:15.865 "name": "spare", 00:17:15.865 "uuid": "eec53323-c836-5ed3-9bf0-215c0e5b0864", 00:17:15.865 "is_configured": true, 00:17:15.865 "data_offset": 256, 00:17:15.865 "data_size": 7936 00:17:15.865 }, 00:17:15.865 { 00:17:15.865 "name": "BaseBdev2", 00:17:15.865 "uuid": "7d40d764-e982-5f01-84ce-ee6970d9e024", 00:17:15.865 "is_configured": true, 00:17:15.865 "data_offset": 256, 00:17:15.865 "data_size": 7936 00:17:15.865 } 00:17:15.865 ] 00:17:15.865 }' 00:17:15.865 20:30:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:15.865 20:30:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:15.865 20:30:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:15.865 20:30:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:15.865 20:30:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:15.865 20:30:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.865 20:30:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.123 [2024-11-26 20:30:09.417280] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:16.123 [2024-11-26 20:30:09.464669] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:16.123 [2024-11-26 
20:30:09.464852] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:16.123 [2024-11-26 20:30:09.464898] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:16.123 [2024-11-26 20:30:09.464924] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:16.123 20:30:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.123 20:30:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:16.124 20:30:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:16.124 20:30:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:16.124 20:30:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:16.124 20:30:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:16.124 20:30:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:16.124 20:30:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:16.124 20:30:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:16.124 20:30:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:16.124 20:30:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:16.124 20:30:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.124 20:30:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.124 20:30:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.124 20:30:09 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:17:16.124 20:30:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.124 20:30:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:16.124 "name": "raid_bdev1", 00:17:16.124 "uuid": "deebc597-6157-4cec-81f0-ba68cdd696c3", 00:17:16.124 "strip_size_kb": 0, 00:17:16.124 "state": "online", 00:17:16.124 "raid_level": "raid1", 00:17:16.124 "superblock": true, 00:17:16.124 "num_base_bdevs": 2, 00:17:16.124 "num_base_bdevs_discovered": 1, 00:17:16.124 "num_base_bdevs_operational": 1, 00:17:16.124 "base_bdevs_list": [ 00:17:16.124 { 00:17:16.124 "name": null, 00:17:16.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.124 "is_configured": false, 00:17:16.124 "data_offset": 0, 00:17:16.124 "data_size": 7936 00:17:16.124 }, 00:17:16.124 { 00:17:16.124 "name": "BaseBdev2", 00:17:16.124 "uuid": "7d40d764-e982-5f01-84ce-ee6970d9e024", 00:17:16.124 "is_configured": true, 00:17:16.124 "data_offset": 256, 00:17:16.124 "data_size": 7936 00:17:16.124 } 00:17:16.124 ] 00:17:16.124 }' 00:17:16.124 20:30:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:16.124 20:30:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.380 20:30:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:16.380 20:30:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.380 20:30:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.638 [2024-11-26 20:30:09.937947] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:16.638 [2024-11-26 20:30:09.938047] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.638 [2024-11-26 20:30:09.938077] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000ab80 00:17:16.638 [2024-11-26 20:30:09.938087] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.638 [2024-11-26 20:30:09.938634] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.638 [2024-11-26 20:30:09.938657] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:16.638 [2024-11-26 20:30:09.938780] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:16.638 [2024-11-26 20:30:09.938796] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:16.638 [2024-11-26 20:30:09.938816] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:16.638 [2024-11-26 20:30:09.938848] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:16.638 spare 00:17:16.638 [2024-11-26 20:30:09.943512] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:17:16.638 20:30:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.638 20:30:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:16.638 [2024-11-26 20:30:09.945771] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:17.570 20:30:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:17.570 20:30:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.570 20:30:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:17.570 20:30:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:17.570 20:30:10 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.570 20:30:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.570 20:30:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.570 20:30:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.570 20:30:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.570 20:30:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.570 20:30:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:17.570 "name": "raid_bdev1", 00:17:17.570 "uuid": "deebc597-6157-4cec-81f0-ba68cdd696c3", 00:17:17.570 "strip_size_kb": 0, 00:17:17.570 "state": "online", 00:17:17.570 "raid_level": "raid1", 00:17:17.570 "superblock": true, 00:17:17.570 "num_base_bdevs": 2, 00:17:17.570 "num_base_bdevs_discovered": 2, 00:17:17.570 "num_base_bdevs_operational": 2, 00:17:17.570 "process": { 00:17:17.570 "type": "rebuild", 00:17:17.570 "target": "spare", 00:17:17.570 "progress": { 00:17:17.570 "blocks": 2560, 00:17:17.570 "percent": 32 00:17:17.570 } 00:17:17.570 }, 00:17:17.570 "base_bdevs_list": [ 00:17:17.570 { 00:17:17.570 "name": "spare", 00:17:17.570 "uuid": "eec53323-c836-5ed3-9bf0-215c0e5b0864", 00:17:17.570 "is_configured": true, 00:17:17.570 "data_offset": 256, 00:17:17.570 "data_size": 7936 00:17:17.570 }, 00:17:17.570 { 00:17:17.570 "name": "BaseBdev2", 00:17:17.570 "uuid": "7d40d764-e982-5f01-84ce-ee6970d9e024", 00:17:17.570 "is_configured": true, 00:17:17.570 "data_offset": 256, 00:17:17.570 "data_size": 7936 00:17:17.570 } 00:17:17.570 ] 00:17:17.570 }' 00:17:17.570 20:30:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.570 20:30:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:17:17.570 20:30:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.570 20:30:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:17.570 20:30:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:17.570 20:30:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.570 20:30:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.570 [2024-11-26 20:30:11.098194] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:17.842 [2024-11-26 20:30:11.153998] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:17.842 [2024-11-26 20:30:11.154110] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:17.842 [2024-11-26 20:30:11.154130] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:17.842 [2024-11-26 20:30:11.154141] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:17.842 20:30:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.842 20:30:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:17.842 20:30:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:17.842 20:30:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:17.842 20:30:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:17.842 20:30:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:17.842 20:30:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:17:17.842 20:30:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:17.842 20:30:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:17.842 20:30:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:17.842 20:30:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:17.842 20:30:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.842 20:30:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.842 20:30:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.842 20:30:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.842 20:30:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.842 20:30:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:17.842 "name": "raid_bdev1", 00:17:17.842 "uuid": "deebc597-6157-4cec-81f0-ba68cdd696c3", 00:17:17.842 "strip_size_kb": 0, 00:17:17.842 "state": "online", 00:17:17.842 "raid_level": "raid1", 00:17:17.842 "superblock": true, 00:17:17.842 "num_base_bdevs": 2, 00:17:17.842 "num_base_bdevs_discovered": 1, 00:17:17.842 "num_base_bdevs_operational": 1, 00:17:17.842 "base_bdevs_list": [ 00:17:17.842 { 00:17:17.842 "name": null, 00:17:17.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.842 "is_configured": false, 00:17:17.842 "data_offset": 0, 00:17:17.842 "data_size": 7936 00:17:17.842 }, 00:17:17.842 { 00:17:17.842 "name": "BaseBdev2", 00:17:17.842 "uuid": "7d40d764-e982-5f01-84ce-ee6970d9e024", 00:17:17.842 "is_configured": true, 00:17:17.842 "data_offset": 256, 00:17:17.842 "data_size": 7936 00:17:17.842 } 00:17:17.842 ] 00:17:17.842 }' 
00:17:17.842 20:30:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:17.842 20:30:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.099 20:30:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:18.099 20:30:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:18.099 20:30:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:18.099 20:30:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:18.099 20:30:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:18.099 20:30:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.099 20:30:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.099 20:30:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.099 20:30:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.099 20:30:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.356 20:30:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:18.356 "name": "raid_bdev1", 00:17:18.356 "uuid": "deebc597-6157-4cec-81f0-ba68cdd696c3", 00:17:18.356 "strip_size_kb": 0, 00:17:18.356 "state": "online", 00:17:18.356 "raid_level": "raid1", 00:17:18.356 "superblock": true, 00:17:18.356 "num_base_bdevs": 2, 00:17:18.356 "num_base_bdevs_discovered": 1, 00:17:18.356 "num_base_bdevs_operational": 1, 00:17:18.356 "base_bdevs_list": [ 00:17:18.356 { 00:17:18.356 "name": null, 00:17:18.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.356 "is_configured": false, 00:17:18.356 "data_offset": 0, 
00:17:18.356 "data_size": 7936 00:17:18.356 }, 00:17:18.356 { 00:17:18.356 "name": "BaseBdev2", 00:17:18.356 "uuid": "7d40d764-e982-5f01-84ce-ee6970d9e024", 00:17:18.356 "is_configured": true, 00:17:18.356 "data_offset": 256, 00:17:18.356 "data_size": 7936 00:17:18.356 } 00:17:18.356 ] 00:17:18.356 }' 00:17:18.356 20:30:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:18.356 20:30:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:18.356 20:30:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:18.356 20:30:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:18.356 20:30:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:18.356 20:30:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.356 20:30:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.356 20:30:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.356 20:30:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:18.356 20:30:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.356 20:30:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.356 [2024-11-26 20:30:11.782865] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:18.356 [2024-11-26 20:30:11.782958] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.356 [2024-11-26 20:30:11.782981] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:18.356 [2024-11-26 20:30:11.782994] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.356 [2024-11-26 20:30:11.783456] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.356 [2024-11-26 20:30:11.783481] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:18.356 [2024-11-26 20:30:11.783563] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:18.356 [2024-11-26 20:30:11.783585] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:18.356 [2024-11-26 20:30:11.783610] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:18.356 [2024-11-26 20:30:11.783647] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:18.356 BaseBdev1 00:17:18.356 20:30:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.356 20:30:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:19.293 20:30:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:19.293 20:30:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:19.293 20:30:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:19.293 20:30:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:19.293 20:30:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:19.293 20:30:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:19.293 20:30:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.293 20:30:12 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.293 20:30:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.293 20:30:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.293 20:30:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.293 20:30:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.293 20:30:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.293 20:30:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.293 20:30:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.552 20:30:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.552 "name": "raid_bdev1", 00:17:19.552 "uuid": "deebc597-6157-4cec-81f0-ba68cdd696c3", 00:17:19.552 "strip_size_kb": 0, 00:17:19.552 "state": "online", 00:17:19.552 "raid_level": "raid1", 00:17:19.552 "superblock": true, 00:17:19.552 "num_base_bdevs": 2, 00:17:19.552 "num_base_bdevs_discovered": 1, 00:17:19.552 "num_base_bdevs_operational": 1, 00:17:19.552 "base_bdevs_list": [ 00:17:19.552 { 00:17:19.552 "name": null, 00:17:19.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.552 "is_configured": false, 00:17:19.552 "data_offset": 0, 00:17:19.552 "data_size": 7936 00:17:19.552 }, 00:17:19.552 { 00:17:19.552 "name": "BaseBdev2", 00:17:19.552 "uuid": "7d40d764-e982-5f01-84ce-ee6970d9e024", 00:17:19.552 "is_configured": true, 00:17:19.552 "data_offset": 256, 00:17:19.552 "data_size": 7936 00:17:19.552 } 00:17:19.552 ] 00:17:19.552 }' 00:17:19.552 20:30:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.552 20:30:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:17:19.812 20:30:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:19.812 20:30:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:19.812 20:30:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:19.812 20:30:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:19.812 20:30:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:19.812 20:30:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.812 20:30:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.812 20:30:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.812 20:30:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.812 20:30:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.812 20:30:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:19.812 "name": "raid_bdev1", 00:17:19.812 "uuid": "deebc597-6157-4cec-81f0-ba68cdd696c3", 00:17:19.812 "strip_size_kb": 0, 00:17:19.812 "state": "online", 00:17:19.812 "raid_level": "raid1", 00:17:19.812 "superblock": true, 00:17:19.812 "num_base_bdevs": 2, 00:17:19.812 "num_base_bdevs_discovered": 1, 00:17:19.812 "num_base_bdevs_operational": 1, 00:17:19.812 "base_bdevs_list": [ 00:17:19.812 { 00:17:19.812 "name": null, 00:17:19.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.812 "is_configured": false, 00:17:19.812 "data_offset": 0, 00:17:19.812 "data_size": 7936 00:17:19.812 }, 00:17:19.812 { 00:17:19.812 "name": "BaseBdev2", 00:17:19.812 "uuid": "7d40d764-e982-5f01-84ce-ee6970d9e024", 00:17:19.812 "is_configured": true, 
00:17:19.812 "data_offset": 256, 00:17:19.812 "data_size": 7936 00:17:19.812 } 00:17:19.812 ] 00:17:19.812 }' 00:17:19.812 20:30:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:19.812 20:30:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:19.812 20:30:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:20.072 20:30:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:20.072 20:30:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:20.072 20:30:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@650 -- # local es=0 00:17:20.072 20:30:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:20.072 20:30:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:20.072 20:30:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:20.072 20:30:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:20.072 20:30:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:20.072 20:30:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:20.072 20:30:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.072 20:30:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.072 [2024-11-26 20:30:13.396646] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:20.072 [2024-11-26 20:30:13.396947] 
bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:20.072 [2024-11-26 20:30:13.397034] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:20.072 request: 00:17:20.072 { 00:17:20.072 "base_bdev": "BaseBdev1", 00:17:20.072 "raid_bdev": "raid_bdev1", 00:17:20.072 "method": "bdev_raid_add_base_bdev", 00:17:20.072 "req_id": 1 00:17:20.072 } 00:17:20.072 Got JSON-RPC error response 00:17:20.072 response: 00:17:20.072 { 00:17:20.072 "code": -22, 00:17:20.072 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:20.072 } 00:17:20.072 20:30:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:20.072 20:30:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:17:20.072 20:30:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:20.072 20:30:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:20.072 20:30:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:20.072 20:30:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:21.010 20:30:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:21.010 20:30:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:21.010 20:30:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:21.010 20:30:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:21.010 20:30:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:21.010 20:30:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:17:21.010 20:30:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:21.010 20:30:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:21.010 20:30:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:21.010 20:30:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:21.010 20:30:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.010 20:30:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.010 20:30:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.010 20:30:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.010 20:30:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.010 20:30:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:21.010 "name": "raid_bdev1", 00:17:21.010 "uuid": "deebc597-6157-4cec-81f0-ba68cdd696c3", 00:17:21.010 "strip_size_kb": 0, 00:17:21.010 "state": "online", 00:17:21.010 "raid_level": "raid1", 00:17:21.010 "superblock": true, 00:17:21.010 "num_base_bdevs": 2, 00:17:21.010 "num_base_bdevs_discovered": 1, 00:17:21.010 "num_base_bdevs_operational": 1, 00:17:21.010 "base_bdevs_list": [ 00:17:21.010 { 00:17:21.010 "name": null, 00:17:21.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.010 "is_configured": false, 00:17:21.010 "data_offset": 0, 00:17:21.010 "data_size": 7936 00:17:21.010 }, 00:17:21.010 { 00:17:21.010 "name": "BaseBdev2", 00:17:21.010 "uuid": "7d40d764-e982-5f01-84ce-ee6970d9e024", 00:17:21.010 "is_configured": true, 00:17:21.010 "data_offset": 256, 00:17:21.010 "data_size": 7936 00:17:21.010 } 00:17:21.010 ] 00:17:21.010 }' 
00:17:21.010 20:30:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:21.010 20:30:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.579 20:30:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:21.579 20:30:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:21.579 20:30:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:21.579 20:30:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:21.579 20:30:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:21.579 20:30:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.579 20:30:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.579 20:30:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.579 20:30:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.579 20:30:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.579 20:30:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:21.579 "name": "raid_bdev1", 00:17:21.579 "uuid": "deebc597-6157-4cec-81f0-ba68cdd696c3", 00:17:21.579 "strip_size_kb": 0, 00:17:21.579 "state": "online", 00:17:21.579 "raid_level": "raid1", 00:17:21.579 "superblock": true, 00:17:21.579 "num_base_bdevs": 2, 00:17:21.579 "num_base_bdevs_discovered": 1, 00:17:21.579 "num_base_bdevs_operational": 1, 00:17:21.579 "base_bdevs_list": [ 00:17:21.579 { 00:17:21.579 "name": null, 00:17:21.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.579 "is_configured": false, 00:17:21.579 "data_offset": 0, 
00:17:21.579 "data_size": 7936 00:17:21.579 }, 00:17:21.579 { 00:17:21.579 "name": "BaseBdev2", 00:17:21.579 "uuid": "7d40d764-e982-5f01-84ce-ee6970d9e024", 00:17:21.579 "is_configured": true, 00:17:21.579 "data_offset": 256, 00:17:21.579 "data_size": 7936 00:17:21.579 } 00:17:21.579 ] 00:17:21.579 }' 00:17:21.579 20:30:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:21.579 20:30:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:21.579 20:30:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:21.579 20:30:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:21.579 20:30:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 97472 00:17:21.579 20:30:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 97472 ']' 00:17:21.579 20:30:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 97472 00:17:21.579 20:30:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:17:21.579 20:30:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:21.579 20:30:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97472 00:17:21.579 20:30:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:21.579 20:30:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:21.579 20:30:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97472' 00:17:21.579 killing process with pid 97472 00:17:21.579 20:30:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@969 -- # kill 97472 00:17:21.579 Received shutdown signal, test time was about 
60.000000 seconds 00:17:21.579 00:17:21.579 Latency(us) 00:17:21.579 [2024-11-26T20:30:15.131Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.579 [2024-11-26T20:30:15.131Z] =================================================================================================================== 00:17:21.579 [2024-11-26T20:30:15.131Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:21.579 20:30:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@974 -- # wait 97472 00:17:21.579 [2024-11-26 20:30:15.089784] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:21.579 [2024-11-26 20:30:15.089950] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:21.579 [2024-11-26 20:30:15.090017] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:21.579 [2024-11-26 20:30:15.090028] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:17:21.838 [2024-11-26 20:30:15.142707] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:22.097 20:30:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:17:22.097 ************************************ 00:17:22.097 END TEST raid_rebuild_test_sb_4k 00:17:22.097 ************************************ 00:17:22.097 00:17:22.097 real 0m19.261s 00:17:22.097 user 0m25.672s 00:17:22.097 sys 0m2.777s 00:17:22.097 20:30:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:22.097 20:30:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.097 20:30:15 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:17:22.097 20:30:15 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:17:22.097 20:30:15 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:22.097 20:30:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:22.097 20:30:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:22.097 ************************************ 00:17:22.097 START TEST raid_state_function_test_sb_md_separate 00:17:22.097 ************************************ 00:17:22.097 20:30:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:17:22.097 20:30:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:22.097 20:30:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:22.097 20:30:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:22.097 20:30:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:22.097 20:30:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:22.097 20:30:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:22.097 20:30:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:22.097 20:30:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:22.097 20:30:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:22.097 20:30:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:22.097 20:30:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:22.097 20:30:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:22.097 20:30:15 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:22.097 20:30:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:22.097 20:30:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:22.097 20:30:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:22.097 20:30:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:22.097 20:30:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:22.097 20:30:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:22.097 20:30:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:22.097 20:30:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:22.097 20:30:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:22.097 Process raid pid: 98163 00:17:22.097 20:30:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=98163 00:17:22.098 20:30:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 98163' 00:17:22.098 20:30:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 98163 00:17:22.098 20:30:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 98163 ']' 00:17:22.098 20:30:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:22.098 20:30:15 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.098 20:30:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:22.098 20:30:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.098 20:30:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:22.098 20:30:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:22.357 [2024-11-26 20:30:15.659316] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:22.357 [2024-11-26 20:30:15.659574] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:22.357 [2024-11-26 20:30:15.814390] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.357 [2024-11-26 20:30:15.898243] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.615 [2024-11-26 20:30:15.974116] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:22.615 [2024-11-26 20:30:15.974159] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:23.184 20:30:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:23.184 20:30:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:17:23.184 20:30:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # 
rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:23.184 20:30:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.184 20:30:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:23.184 [2024-11-26 20:30:16.556760] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:23.184 [2024-11-26 20:30:16.556833] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:23.184 [2024-11-26 20:30:16.556847] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:23.184 [2024-11-26 20:30:16.556858] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:23.184 20:30:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.184 20:30:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:23.184 20:30:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:23.184 20:30:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:23.184 20:30:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:23.184 20:30:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:23.184 20:30:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:23.184 20:30:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:23.184 20:30:16 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:23.184 20:30:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:23.184 20:30:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:23.184 20:30:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.184 20:30:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:23.184 20:30:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.184 20:30:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:23.184 20:30:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.184 20:30:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:23.184 "name": "Existed_Raid", 00:17:23.184 "uuid": "de8ee70b-ce9b-466d-80a1-109358b00177", 00:17:23.184 "strip_size_kb": 0, 00:17:23.184 "state": "configuring", 00:17:23.184 "raid_level": "raid1", 00:17:23.184 "superblock": true, 00:17:23.184 "num_base_bdevs": 2, 00:17:23.184 "num_base_bdevs_discovered": 0, 00:17:23.184 "num_base_bdevs_operational": 2, 00:17:23.184 "base_bdevs_list": [ 00:17:23.184 { 00:17:23.184 "name": "BaseBdev1", 00:17:23.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.184 "is_configured": false, 00:17:23.184 "data_offset": 0, 00:17:23.184 "data_size": 0 00:17:23.184 }, 00:17:23.184 { 00:17:23.184 "name": "BaseBdev2", 00:17:23.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.184 "is_configured": false, 00:17:23.184 "data_offset": 0, 00:17:23.184 "data_size": 0 00:17:23.184 } 00:17:23.184 ] 00:17:23.184 }' 00:17:23.184 20:30:16 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:23.184 20:30:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:23.753 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:23.753 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.753 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:23.753 [2024-11-26 20:30:17.031857] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:23.753 [2024-11-26 20:30:17.031933] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:17:23.753 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.753 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:23.753 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.753 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:23.753 [2024-11-26 20:30:17.043876] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:23.753 [2024-11-26 20:30:17.044014] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:23.753 [2024-11-26 20:30:17.044050] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:23.753 [2024-11-26 20:30:17.044079] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:23.753 20:30:17 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.753 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:17:23.753 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.753 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:23.753 [2024-11-26 20:30:17.072365] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:23.753 BaseBdev1 00:17:23.753 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.753 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:23.753 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:23.753 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:23.753 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:17:23.753 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:23.753 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:23.753 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:23.753 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.753 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:23.753 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.753 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:23.753 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.753 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:23.753 [ 00:17:23.753 { 00:17:23.753 "name": "BaseBdev1", 00:17:23.753 "aliases": [ 00:17:23.753 "252587ee-371d-49ad-b473-1881b3a7a5fe" 00:17:23.753 ], 00:17:23.753 "product_name": "Malloc disk", 00:17:23.753 "block_size": 4096, 00:17:23.753 "num_blocks": 8192, 00:17:23.753 "uuid": "252587ee-371d-49ad-b473-1881b3a7a5fe", 00:17:23.753 "md_size": 32, 00:17:23.753 "md_interleave": false, 00:17:23.753 "dif_type": 0, 00:17:23.753 "assigned_rate_limits": { 00:17:23.753 "rw_ios_per_sec": 0, 00:17:23.753 "rw_mbytes_per_sec": 0, 00:17:23.753 "r_mbytes_per_sec": 0, 00:17:23.753 "w_mbytes_per_sec": 0 00:17:23.753 }, 00:17:23.753 "claimed": true, 00:17:23.753 "claim_type": "exclusive_write", 00:17:23.753 "zoned": false, 00:17:23.753 "supported_io_types": { 00:17:23.753 "read": true, 00:17:23.753 "write": true, 00:17:23.753 "unmap": true, 00:17:23.753 "flush": true, 00:17:23.753 "reset": true, 00:17:23.753 "nvme_admin": false, 00:17:23.753 "nvme_io": false, 00:17:23.754 "nvme_io_md": false, 00:17:23.754 "write_zeroes": true, 00:17:23.754 "zcopy": true, 00:17:23.754 "get_zone_info": false, 00:17:23.754 "zone_management": false, 00:17:23.754 "zone_append": false, 00:17:23.754 "compare": false, 00:17:23.754 "compare_and_write": false, 00:17:23.754 "abort": true, 00:17:23.754 "seek_hole": false, 00:17:23.754 "seek_data": false, 00:17:23.754 "copy": true, 00:17:23.754 "nvme_iov_md": false 00:17:23.754 }, 00:17:23.754 "memory_domains": [ 00:17:23.754 { 00:17:23.754 "dma_device_id": "system", 00:17:23.754 "dma_device_type": 1 00:17:23.754 }, 
00:17:23.754 { 00:17:23.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:23.754 "dma_device_type": 2 00:17:23.754 } 00:17:23.754 ], 00:17:23.754 "driver_specific": {} 00:17:23.754 } 00:17:23.754 ] 00:17:23.754 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.754 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:17:23.754 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:23.754 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:23.754 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:23.754 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:23.754 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:23.754 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:23.754 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:23.754 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:23.754 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:23.754 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:23.754 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:23.754 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:17:23.754 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.754 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:23.754 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.754 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:23.754 "name": "Existed_Raid", 00:17:23.754 "uuid": "fa7d32d5-535a-46b6-844d-62135049df5b", 00:17:23.754 "strip_size_kb": 0, 00:17:23.754 "state": "configuring", 00:17:23.754 "raid_level": "raid1", 00:17:23.754 "superblock": true, 00:17:23.754 "num_base_bdevs": 2, 00:17:23.754 "num_base_bdevs_discovered": 1, 00:17:23.754 "num_base_bdevs_operational": 2, 00:17:23.754 "base_bdevs_list": [ 00:17:23.754 { 00:17:23.754 "name": "BaseBdev1", 00:17:23.754 "uuid": "252587ee-371d-49ad-b473-1881b3a7a5fe", 00:17:23.754 "is_configured": true, 00:17:23.754 "data_offset": 256, 00:17:23.754 "data_size": 7936 00:17:23.754 }, 00:17:23.754 { 00:17:23.754 "name": "BaseBdev2", 00:17:23.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.754 "is_configured": false, 00:17:23.754 "data_offset": 0, 00:17:23.754 "data_size": 0 00:17:23.754 } 00:17:23.754 ] 00:17:23.754 }' 00:17:23.754 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:23.754 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.323 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:24.323 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.323 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:17:24.323 [2024-11-26 20:30:17.599611] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:24.323 [2024-11-26 20:30:17.599793] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:17:24.323 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.323 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:24.323 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.323 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.323 [2024-11-26 20:30:17.611679] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:24.323 [2024-11-26 20:30:17.613590] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:24.323 [2024-11-26 20:30:17.613656] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:24.323 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.323 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:24.323 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:24.323 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:24.323 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:24.323 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:17:24.323 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:24.323 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:24.323 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:24.323 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.323 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.323 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.323 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.323 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.323 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:24.323 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.323 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.323 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.323 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.323 "name": "Existed_Raid", 00:17:24.323 "uuid": "4a396b1b-f836-4fc5-bbd3-aba0e1cb2d8b", 00:17:24.323 "strip_size_kb": 0, 00:17:24.323 "state": "configuring", 00:17:24.323 "raid_level": "raid1", 00:17:24.323 "superblock": true, 00:17:24.323 "num_base_bdevs": 2, 00:17:24.323 "num_base_bdevs_discovered": 1, 00:17:24.323 
"num_base_bdevs_operational": 2, 00:17:24.323 "base_bdevs_list": [ 00:17:24.323 { 00:17:24.323 "name": "BaseBdev1", 00:17:24.323 "uuid": "252587ee-371d-49ad-b473-1881b3a7a5fe", 00:17:24.323 "is_configured": true, 00:17:24.323 "data_offset": 256, 00:17:24.323 "data_size": 7936 00:17:24.323 }, 00:17:24.323 { 00:17:24.323 "name": "BaseBdev2", 00:17:24.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.323 "is_configured": false, 00:17:24.323 "data_offset": 0, 00:17:24.323 "data_size": 0 00:17:24.323 } 00:17:24.323 ] 00:17:24.323 }' 00:17:24.323 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.323 20:30:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.582 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:17:24.582 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.582 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.582 [2024-11-26 20:30:18.121559] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:24.582 [2024-11-26 20:30:18.122039] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:17:24.583 [2024-11-26 20:30:18.122132] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:24.583 [2024-11-26 20:30:18.122345] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:17:24.583 BaseBdev2 00:17:24.583 [2024-11-26 20:30:18.122595] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:17:24.583 [2024-11-26 20:30:18.122646] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 
00:17:24.583 [2024-11-26 20:30:18.122790] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:24.583 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.583 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:24.583 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:24.583 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:24.583 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:17:24.583 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:24.583 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:24.583 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:24.583 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.583 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.842 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.842 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:24.842 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.842 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.842 [ 00:17:24.842 { 00:17:24.842 "name": "BaseBdev2", 00:17:24.842 "aliases": [ 00:17:24.842 
"9b275ef7-66c8-40fe-a27b-ab4cdb557664" 00:17:24.842 ], 00:17:24.842 "product_name": "Malloc disk", 00:17:24.842 "block_size": 4096, 00:17:24.842 "num_blocks": 8192, 00:17:24.842 "uuid": "9b275ef7-66c8-40fe-a27b-ab4cdb557664", 00:17:24.842 "md_size": 32, 00:17:24.842 "md_interleave": false, 00:17:24.842 "dif_type": 0, 00:17:24.842 "assigned_rate_limits": { 00:17:24.842 "rw_ios_per_sec": 0, 00:17:24.842 "rw_mbytes_per_sec": 0, 00:17:24.842 "r_mbytes_per_sec": 0, 00:17:24.842 "w_mbytes_per_sec": 0 00:17:24.842 }, 00:17:24.842 "claimed": true, 00:17:24.842 "claim_type": "exclusive_write", 00:17:24.842 "zoned": false, 00:17:24.842 "supported_io_types": { 00:17:24.842 "read": true, 00:17:24.842 "write": true, 00:17:24.842 "unmap": true, 00:17:24.842 "flush": true, 00:17:24.842 "reset": true, 00:17:24.842 "nvme_admin": false, 00:17:24.842 "nvme_io": false, 00:17:24.842 "nvme_io_md": false, 00:17:24.842 "write_zeroes": true, 00:17:24.842 "zcopy": true, 00:17:24.842 "get_zone_info": false, 00:17:24.842 "zone_management": false, 00:17:24.842 "zone_append": false, 00:17:24.842 "compare": false, 00:17:24.842 "compare_and_write": false, 00:17:24.842 "abort": true, 00:17:24.842 "seek_hole": false, 00:17:24.842 "seek_data": false, 00:17:24.842 "copy": true, 00:17:24.842 "nvme_iov_md": false 00:17:24.842 }, 00:17:24.842 "memory_domains": [ 00:17:24.842 { 00:17:24.842 "dma_device_id": "system", 00:17:24.842 "dma_device_type": 1 00:17:24.842 }, 00:17:24.842 { 00:17:24.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.842 "dma_device_type": 2 00:17:24.842 } 00:17:24.842 ], 00:17:24.842 "driver_specific": {} 00:17:24.842 } 00:17:24.842 ] 00:17:24.842 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.842 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:17:24.842 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( 
i++ )) 00:17:24.842 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:24.842 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:24.842 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:24.842 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:24.842 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:24.842 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:24.842 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:24.842 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.842 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.842 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.842 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.842 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.842 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.842 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.842 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:24.842 20:30:18 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.842 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.842 "name": "Existed_Raid", 00:17:24.842 "uuid": "4a396b1b-f836-4fc5-bbd3-aba0e1cb2d8b", 00:17:24.842 "strip_size_kb": 0, 00:17:24.842 "state": "online", 00:17:24.842 "raid_level": "raid1", 00:17:24.842 "superblock": true, 00:17:24.842 "num_base_bdevs": 2, 00:17:24.842 "num_base_bdevs_discovered": 2, 00:17:24.842 "num_base_bdevs_operational": 2, 00:17:24.842 "base_bdevs_list": [ 00:17:24.842 { 00:17:24.842 "name": "BaseBdev1", 00:17:24.842 "uuid": "252587ee-371d-49ad-b473-1881b3a7a5fe", 00:17:24.842 "is_configured": true, 00:17:24.842 "data_offset": 256, 00:17:24.842 "data_size": 7936 00:17:24.842 }, 00:17:24.842 { 00:17:24.842 "name": "BaseBdev2", 00:17:24.842 "uuid": "9b275ef7-66c8-40fe-a27b-ab4cdb557664", 00:17:24.842 "is_configured": true, 00:17:24.842 "data_offset": 256, 00:17:24.842 "data_size": 7936 00:17:24.842 } 00:17:24.842 ] 00:17:24.842 }' 00:17:24.842 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.842 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.101 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:25.101 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:25.101 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:25.101 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:25.101 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:25.101 20:30:18 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:25.101 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:25.101 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:25.101 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.101 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.101 [2024-11-26 20:30:18.617200] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:25.101 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.360 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:25.360 "name": "Existed_Raid", 00:17:25.360 "aliases": [ 00:17:25.360 "4a396b1b-f836-4fc5-bbd3-aba0e1cb2d8b" 00:17:25.360 ], 00:17:25.360 "product_name": "Raid Volume", 00:17:25.360 "block_size": 4096, 00:17:25.360 "num_blocks": 7936, 00:17:25.360 "uuid": "4a396b1b-f836-4fc5-bbd3-aba0e1cb2d8b", 00:17:25.360 "md_size": 32, 00:17:25.360 "md_interleave": false, 00:17:25.360 "dif_type": 0, 00:17:25.360 "assigned_rate_limits": { 00:17:25.360 "rw_ios_per_sec": 0, 00:17:25.360 "rw_mbytes_per_sec": 0, 00:17:25.360 "r_mbytes_per_sec": 0, 00:17:25.360 "w_mbytes_per_sec": 0 00:17:25.360 }, 00:17:25.360 "claimed": false, 00:17:25.360 "zoned": false, 00:17:25.360 "supported_io_types": { 00:17:25.360 "read": true, 00:17:25.360 "write": true, 00:17:25.360 "unmap": false, 00:17:25.360 "flush": false, 00:17:25.360 "reset": true, 00:17:25.360 "nvme_admin": false, 00:17:25.360 "nvme_io": false, 00:17:25.360 "nvme_io_md": false, 00:17:25.360 "write_zeroes": true, 00:17:25.360 "zcopy": false, 00:17:25.360 "get_zone_info": 
false, 00:17:25.360 "zone_management": false, 00:17:25.360 "zone_append": false, 00:17:25.361 "compare": false, 00:17:25.361 "compare_and_write": false, 00:17:25.361 "abort": false, 00:17:25.361 "seek_hole": false, 00:17:25.361 "seek_data": false, 00:17:25.361 "copy": false, 00:17:25.361 "nvme_iov_md": false 00:17:25.361 }, 00:17:25.361 "memory_domains": [ 00:17:25.361 { 00:17:25.361 "dma_device_id": "system", 00:17:25.361 "dma_device_type": 1 00:17:25.361 }, 00:17:25.361 { 00:17:25.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:25.361 "dma_device_type": 2 00:17:25.361 }, 00:17:25.361 { 00:17:25.361 "dma_device_id": "system", 00:17:25.361 "dma_device_type": 1 00:17:25.361 }, 00:17:25.361 { 00:17:25.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:25.361 "dma_device_type": 2 00:17:25.361 } 00:17:25.361 ], 00:17:25.361 "driver_specific": { 00:17:25.361 "raid": { 00:17:25.361 "uuid": "4a396b1b-f836-4fc5-bbd3-aba0e1cb2d8b", 00:17:25.361 "strip_size_kb": 0, 00:17:25.361 "state": "online", 00:17:25.361 "raid_level": "raid1", 00:17:25.361 "superblock": true, 00:17:25.361 "num_base_bdevs": 2, 00:17:25.361 "num_base_bdevs_discovered": 2, 00:17:25.361 "num_base_bdevs_operational": 2, 00:17:25.361 "base_bdevs_list": [ 00:17:25.361 { 00:17:25.361 "name": "BaseBdev1", 00:17:25.361 "uuid": "252587ee-371d-49ad-b473-1881b3a7a5fe", 00:17:25.361 "is_configured": true, 00:17:25.361 "data_offset": 256, 00:17:25.361 "data_size": 7936 00:17:25.361 }, 00:17:25.361 { 00:17:25.361 "name": "BaseBdev2", 00:17:25.361 "uuid": "9b275ef7-66c8-40fe-a27b-ab4cdb557664", 00:17:25.361 "is_configured": true, 00:17:25.361 "data_offset": 256, 00:17:25.361 "data_size": 7936 00:17:25.361 } 00:17:25.361 ] 00:17:25.361 } 00:17:25.361 } 00:17:25.361 }' 00:17:25.361 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:25.361 20:30:18 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:25.361 BaseBdev2' 00:17:25.361 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:25.361 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:25.361 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:25.361 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:25.361 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:25.361 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.361 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.361 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.361 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:25.361 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:25.361 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:25.361 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:25.361 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:17:25.361 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.361 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.361 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.361 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:25.361 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:25.361 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:25.361 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.361 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.361 [2024-11-26 20:30:18.852584] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:25.361 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.361 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:25.361 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:25.361 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:25.361 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:17:25.361 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:25.361 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid 
online raid1 0 1 00:17:25.361 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:25.361 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:25.361 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:25.361 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:25.361 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:25.361 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.361 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.361 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.361 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.361 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.361 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:25.361 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.361 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.361 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.620 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.620 "name": "Existed_Raid", 00:17:25.620 "uuid": 
"4a396b1b-f836-4fc5-bbd3-aba0e1cb2d8b", 00:17:25.620 "strip_size_kb": 0, 00:17:25.620 "state": "online", 00:17:25.620 "raid_level": "raid1", 00:17:25.620 "superblock": true, 00:17:25.620 "num_base_bdevs": 2, 00:17:25.620 "num_base_bdevs_discovered": 1, 00:17:25.620 "num_base_bdevs_operational": 1, 00:17:25.620 "base_bdevs_list": [ 00:17:25.620 { 00:17:25.620 "name": null, 00:17:25.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.620 "is_configured": false, 00:17:25.620 "data_offset": 0, 00:17:25.620 "data_size": 7936 00:17:25.620 }, 00:17:25.620 { 00:17:25.620 "name": "BaseBdev2", 00:17:25.620 "uuid": "9b275ef7-66c8-40fe-a27b-ab4cdb557664", 00:17:25.620 "is_configured": true, 00:17:25.620 "data_offset": 256, 00:17:25.620 "data_size": 7936 00:17:25.620 } 00:17:25.620 ] 00:17:25.620 }' 00:17:25.620 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.620 20:30:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.886 20:30:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:25.887 20:30:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:25.887 20:30:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.887 20:30:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:25.887 20:30:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.887 20:30:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.887 20:30:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.887 20:30:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 
-- # raid_bdev=Existed_Raid 00:17:25.887 20:30:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:25.887 20:30:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:25.887 20:30:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.887 20:30:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.887 [2024-11-26 20:30:19.414510] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:25.887 [2024-11-26 20:30:19.414771] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:25.887 [2024-11-26 20:30:19.429304] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:25.887 [2024-11-26 20:30:19.429358] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:25.887 [2024-11-26 20:30:19.429378] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:17:25.887 20:30:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.887 20:30:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:25.887 20:30:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:26.150 20:30:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.150 20:30:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:26.150 20:30:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.150 20:30:19 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:26.150 20:30:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.150 20:30:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:26.150 20:30:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:26.150 20:30:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:26.150 20:30:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 98163 00:17:26.150 20:30:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 98163 ']' 00:17:26.150 20:30:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 98163 00:17:26.150 20:30:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:17:26.150 20:30:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:26.151 20:30:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98163 00:17:26.151 20:30:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:26.151 killing process with pid 98163 00:17:26.151 20:30:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:26.151 20:30:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98163' 00:17:26.151 20:30:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 98163 00:17:26.151 [2024-11-26 20:30:19.524412] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:17:26.151 20:30:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 98163 00:17:26.151 [2024-11-26 20:30:19.526066] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:26.409 20:30:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:17:26.409 00:17:26.409 real 0m4.325s 00:17:26.409 user 0m6.661s 00:17:26.409 sys 0m0.986s 00:17:26.409 ************************************ 00:17:26.409 END TEST raid_state_function_test_sb_md_separate 00:17:26.409 ************************************ 00:17:26.409 20:30:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:26.409 20:30:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:26.409 20:30:19 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:17:26.409 20:30:19 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:17:26.409 20:30:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:26.409 20:30:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:26.409 ************************************ 00:17:26.409 START TEST raid_superblock_test_md_separate 00:17:26.409 ************************************ 00:17:26.409 20:30:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:17:26.409 20:30:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:26.409 20:30:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:26.409 20:30:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:26.410 20:30:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 
00:17:26.410 20:30:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:26.410 20:30:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:26.410 20:30:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:26.410 20:30:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:26.410 20:30:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:26.410 20:30:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:26.410 20:30:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:26.410 20:30:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:26.410 20:30:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:26.410 20:30:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:26.410 20:30:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:26.410 20:30:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=98405 00:17:26.410 20:30:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:26.410 20:30:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 98405 00:17:26.410 20:30:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # '[' -z 98405 ']' 00:17:26.410 20:30:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:26.410 20:30:19 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:17:26.410 20:30:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:26.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:26.669 20:30:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:26.669 20:30:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:26.669 [2024-11-26 20:30:20.047034] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:26.669 [2024-11-26 20:30:20.047183] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98405 ] 00:17:26.669 [2024-11-26 20:30:20.211289] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.928 [2024-11-26 20:30:20.294139] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.928 [2024-11-26 20:30:20.367329] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:26.928 [2024-11-26 20:30:20.367474] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:27.497 20:30:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:27.497 20:30:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # return 0 00:17:27.497 20:30:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:27.497 20:30:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:27.497 20:30:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 
-- # local bdev_malloc=malloc1 00:17:27.497 20:30:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:27.497 20:30:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:27.497 20:30:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:27.497 20:30:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:27.497 20:30:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:27.497 20:30:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:17:27.497 20:30:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.497 20:30:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.497 malloc1 00:17:27.497 20:30:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.497 20:30:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:27.497 20:30:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.497 20:30:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.497 [2024-11-26 20:30:20.959474] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:27.497 [2024-11-26 20:30:20.959658] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:27.497 [2024-11-26 20:30:20.959717] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:27.497 [2024-11-26 
20:30:20.959778] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:27.497 [2024-11-26 20:30:20.962187] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:27.497 [2024-11-26 20:30:20.962276] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:27.497 pt1 00:17:27.497 20:30:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.497 20:30:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:27.497 20:30:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:27.497 20:30:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:27.497 20:30:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:27.497 20:30:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:27.497 20:30:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:27.497 20:30:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:27.497 20:30:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:27.497 20:30:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:17:27.497 20:30:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.497 20:30:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.497 malloc2 00:17:27.497 20:30:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.497 20:30:21 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:27.497 20:30:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.497 20:30:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.497 [2024-11-26 20:30:21.008349] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:27.497 [2024-11-26 20:30:21.008444] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:27.497 [2024-11-26 20:30:21.008464] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:27.497 [2024-11-26 20:30:21.008478] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:27.497 [2024-11-26 20:30:21.010857] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:27.497 [2024-11-26 20:30:21.010988] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:27.497 pt2 00:17:27.497 20:30:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.497 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:27.497 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:27.497 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:27.497 20:30:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.497 20:30:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.497 [2024-11-26 20:30:21.020361] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:27.497 
[2024-11-26 20:30:21.022658] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:27.497 [2024-11-26 20:30:21.022837] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:17:27.497 [2024-11-26 20:30:21.022856] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:27.497 [2024-11-26 20:30:21.022965] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:17:27.497 [2024-11-26 20:30:21.023080] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:17:27.497 [2024-11-26 20:30:21.023093] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:17:27.498 [2024-11-26 20:30:21.023206] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:27.498 20:30:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.498 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:27.498 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:27.498 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:27.498 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:27.498 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:27.498 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:27.498 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:27.498 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:17:27.498 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:27.498 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:27.498 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.498 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.498 20:30:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.498 20:30:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.759 20:30:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.759 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:27.759 "name": "raid_bdev1", 00:17:27.759 "uuid": "6b7712d4-9f58-4d37-bb51-ea629b0ad5b6", 00:17:27.759 "strip_size_kb": 0, 00:17:27.759 "state": "online", 00:17:27.759 "raid_level": "raid1", 00:17:27.759 "superblock": true, 00:17:27.759 "num_base_bdevs": 2, 00:17:27.759 "num_base_bdevs_discovered": 2, 00:17:27.759 "num_base_bdevs_operational": 2, 00:17:27.759 "base_bdevs_list": [ 00:17:27.759 { 00:17:27.759 "name": "pt1", 00:17:27.759 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:27.759 "is_configured": true, 00:17:27.759 "data_offset": 256, 00:17:27.759 "data_size": 7936 00:17:27.759 }, 00:17:27.759 { 00:17:27.759 "name": "pt2", 00:17:27.759 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:27.759 "is_configured": true, 00:17:27.759 "data_offset": 256, 00:17:27.759 "data_size": 7936 00:17:27.759 } 00:17:27.759 ] 00:17:27.759 }' 00:17:27.759 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:27.759 20:30:21 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:17:28.023 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:28.023 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:28.023 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:28.023 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:28.023 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:28.023 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:28.023 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:28.023 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:28.023 20:30:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.023 20:30:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.023 [2024-11-26 20:30:21.519967] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:28.023 20:30:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.023 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:28.023 "name": "raid_bdev1", 00:17:28.023 "aliases": [ 00:17:28.023 "6b7712d4-9f58-4d37-bb51-ea629b0ad5b6" 00:17:28.023 ], 00:17:28.023 "product_name": "Raid Volume", 00:17:28.023 "block_size": 4096, 00:17:28.023 "num_blocks": 7936, 00:17:28.023 "uuid": "6b7712d4-9f58-4d37-bb51-ea629b0ad5b6", 00:17:28.023 "md_size": 32, 00:17:28.023 "md_interleave": false, 00:17:28.023 "dif_type": 0, 00:17:28.023 
"assigned_rate_limits": { 00:17:28.023 "rw_ios_per_sec": 0, 00:17:28.023 "rw_mbytes_per_sec": 0, 00:17:28.023 "r_mbytes_per_sec": 0, 00:17:28.023 "w_mbytes_per_sec": 0 00:17:28.023 }, 00:17:28.023 "claimed": false, 00:17:28.023 "zoned": false, 00:17:28.023 "supported_io_types": { 00:17:28.023 "read": true, 00:17:28.023 "write": true, 00:17:28.023 "unmap": false, 00:17:28.023 "flush": false, 00:17:28.023 "reset": true, 00:17:28.023 "nvme_admin": false, 00:17:28.023 "nvme_io": false, 00:17:28.023 "nvme_io_md": false, 00:17:28.023 "write_zeroes": true, 00:17:28.023 "zcopy": false, 00:17:28.023 "get_zone_info": false, 00:17:28.023 "zone_management": false, 00:17:28.023 "zone_append": false, 00:17:28.023 "compare": false, 00:17:28.023 "compare_and_write": false, 00:17:28.023 "abort": false, 00:17:28.023 "seek_hole": false, 00:17:28.023 "seek_data": false, 00:17:28.023 "copy": false, 00:17:28.023 "nvme_iov_md": false 00:17:28.023 }, 00:17:28.023 "memory_domains": [ 00:17:28.023 { 00:17:28.023 "dma_device_id": "system", 00:17:28.023 "dma_device_type": 1 00:17:28.023 }, 00:17:28.023 { 00:17:28.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.023 "dma_device_type": 2 00:17:28.023 }, 00:17:28.023 { 00:17:28.023 "dma_device_id": "system", 00:17:28.023 "dma_device_type": 1 00:17:28.023 }, 00:17:28.023 { 00:17:28.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.023 "dma_device_type": 2 00:17:28.023 } 00:17:28.023 ], 00:17:28.023 "driver_specific": { 00:17:28.023 "raid": { 00:17:28.023 "uuid": "6b7712d4-9f58-4d37-bb51-ea629b0ad5b6", 00:17:28.023 "strip_size_kb": 0, 00:17:28.023 "state": "online", 00:17:28.023 "raid_level": "raid1", 00:17:28.023 "superblock": true, 00:17:28.023 "num_base_bdevs": 2, 00:17:28.023 "num_base_bdevs_discovered": 2, 00:17:28.023 "num_base_bdevs_operational": 2, 00:17:28.023 "base_bdevs_list": [ 00:17:28.023 { 00:17:28.023 "name": "pt1", 00:17:28.023 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:28.023 "is_configured": true, 
00:17:28.023 "data_offset": 256, 00:17:28.023 "data_size": 7936 00:17:28.023 }, 00:17:28.023 { 00:17:28.023 "name": "pt2", 00:17:28.023 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:28.023 "is_configured": true, 00:17:28.023 "data_offset": 256, 00:17:28.023 "data_size": 7936 00:17:28.023 } 00:17:28.023 ] 00:17:28.023 } 00:17:28.023 } 00:17:28.023 }' 00:17:28.023 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:28.282 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:28.282 pt2' 00:17:28.282 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:28.282 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:28.282 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:28.282 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:28.282 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:28.282 20:30:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.282 20:30:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.282 20:30:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.282 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:28.282 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ 
\f\a\l\s\e\ \0 ]] 00:17:28.282 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:28.282 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:28.282 20:30:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.282 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:28.283 20:30:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.283 20:30:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.283 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:28.283 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:28.283 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:28.283 20:30:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.283 20:30:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.283 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:28.283 [2024-11-26 20:30:21.771375] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:28.283 20:30:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.283 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6b7712d4-9f58-4d37-bb51-ea629b0ad5b6 00:17:28.283 20:30:21 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@436 -- # '[' -z 6b7712d4-9f58-4d37-bb51-ea629b0ad5b6 ']' 00:17:28.283 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:28.283 20:30:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.283 20:30:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.283 [2024-11-26 20:30:21.807074] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:28.283 [2024-11-26 20:30:21.807122] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:28.283 [2024-11-26 20:30:21.807218] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:28.283 [2024-11-26 20:30:21.807283] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:28.283 [2024-11-26 20:30:21.807295] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:17:28.283 20:30:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.283 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.283 20:30:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.283 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:28.283 20:30:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.283 20:30:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.542 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:28.542 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- 
# '[' -n '' ']' 00:17:28.542 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:28.542 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:28.542 20:30:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.542 20:30:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.542 20:30:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.542 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:28.542 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:28.542 20:30:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.542 20:30:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.542 20:30:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.542 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:28.542 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:28.542 20:30:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.542 20:30:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.542 20:30:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.542 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:28.542 20:30:21 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:28.542 20:30:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:17:28.542 20:30:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:28.542 20:30:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:28.542 20:30:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:28.542 20:30:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:28.542 20:30:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:28.542 20:30:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:28.542 20:30:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.542 20:30:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.542 [2024-11-26 20:30:21.954872] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:28.542 [2024-11-26 20:30:21.957189] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:28.542 [2024-11-26 20:30:21.957329] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:28.542 [2024-11-26 20:30:21.957449] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:28.542 [2024-11-26 20:30:21.957521] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:28.542 [2024-11-26 20:30:21.957559] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:17:28.542 request: 00:17:28.542 { 00:17:28.542 "name": "raid_bdev1", 00:17:28.542 "raid_level": "raid1", 00:17:28.542 "base_bdevs": [ 00:17:28.542 "malloc1", 00:17:28.542 "malloc2" 00:17:28.542 ], 00:17:28.542 "superblock": false, 00:17:28.542 "method": "bdev_raid_create", 00:17:28.542 "req_id": 1 00:17:28.542 } 00:17:28.542 Got JSON-RPC error response 00:17:28.542 response: 00:17:28.542 { 00:17:28.542 "code": -17, 00:17:28.542 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:28.542 } 00:17:28.542 20:30:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:28.542 20:30:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:17:28.542 20:30:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:28.542 20:30:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:28.542 20:30:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:28.542 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:28.542 20:30:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.542 20:30:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.542 20:30:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.542 20:30:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.542 20:30:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # 
raid_bdev= 00:17:28.542 20:30:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:28.542 20:30:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:28.542 20:30:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.542 20:30:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.542 [2024-11-26 20:30:22.018706] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:28.542 [2024-11-26 20:30:22.018798] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:28.542 [2024-11-26 20:30:22.018820] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:28.542 [2024-11-26 20:30:22.018831] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:28.542 [2024-11-26 20:30:22.020930] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:28.542 [2024-11-26 20:30:22.020983] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:28.542 [2024-11-26 20:30:22.021055] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:28.542 [2024-11-26 20:30:22.021092] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:28.542 pt1 00:17:28.543 20:30:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.543 20:30:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:28.543 20:30:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:28.543 20:30:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 
-- # local expected_state=configuring 00:17:28.543 20:30:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:28.543 20:30:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:28.543 20:30:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:28.543 20:30:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.543 20:30:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.543 20:30:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.543 20:30:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:28.543 20:30:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.543 20:30:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.543 20:30:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.543 20:30:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.543 20:30:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.543 20:30:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.543 "name": "raid_bdev1", 00:17:28.543 "uuid": "6b7712d4-9f58-4d37-bb51-ea629b0ad5b6", 00:17:28.543 "strip_size_kb": 0, 00:17:28.543 "state": "configuring", 00:17:28.543 "raid_level": "raid1", 00:17:28.543 "superblock": true, 00:17:28.543 "num_base_bdevs": 2, 00:17:28.543 "num_base_bdevs_discovered": 1, 00:17:28.543 "num_base_bdevs_operational": 2, 00:17:28.543 "base_bdevs_list": [ 00:17:28.543 { 
00:17:28.543 "name": "pt1", 00:17:28.543 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:28.543 "is_configured": true, 00:17:28.543 "data_offset": 256, 00:17:28.543 "data_size": 7936 00:17:28.543 }, 00:17:28.543 { 00:17:28.543 "name": null, 00:17:28.543 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:28.543 "is_configured": false, 00:17:28.543 "data_offset": 256, 00:17:28.543 "data_size": 7936 00:17:28.543 } 00:17:28.543 ] 00:17:28.543 }' 00:17:28.543 20:30:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.543 20:30:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.111 20:30:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:29.111 20:30:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:29.111 20:30:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:29.111 20:30:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:29.111 20:30:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.111 20:30:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.111 [2024-11-26 20:30:22.497928] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:29.111 [2024-11-26 20:30:22.498127] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:29.111 [2024-11-26 20:30:22.498185] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:29.111 [2024-11-26 20:30:22.498229] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:29.111 [2024-11-26 20:30:22.498466] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:17:29.111 [2024-11-26 20:30:22.498520] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:29.111 [2024-11-26 20:30:22.498607] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:29.111 [2024-11-26 20:30:22.498671] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:29.111 [2024-11-26 20:30:22.498795] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:17:29.111 [2024-11-26 20:30:22.498848] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:29.111 [2024-11-26 20:30:22.498949] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:17:29.111 [2024-11-26 20:30:22.499075] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:17:29.111 [2024-11-26 20:30:22.499122] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:17:29.111 [2024-11-26 20:30:22.499238] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:29.111 pt2 00:17:29.111 20:30:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.111 20:30:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:29.111 20:30:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:29.111 20:30:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:29.111 20:30:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:29.111 20:30:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:29.111 20:30:22 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:29.111 20:30:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:29.112 20:30:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:29.112 20:30:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:29.112 20:30:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:29.112 20:30:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:29.112 20:30:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:29.112 20:30:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.112 20:30:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.112 20:30:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.112 20:30:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.112 20:30:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.112 20:30:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:29.112 "name": "raid_bdev1", 00:17:29.112 "uuid": "6b7712d4-9f58-4d37-bb51-ea629b0ad5b6", 00:17:29.112 "strip_size_kb": 0, 00:17:29.112 "state": "online", 00:17:29.112 "raid_level": "raid1", 00:17:29.112 "superblock": true, 00:17:29.112 "num_base_bdevs": 2, 00:17:29.112 "num_base_bdevs_discovered": 2, 00:17:29.112 "num_base_bdevs_operational": 2, 00:17:29.112 "base_bdevs_list": [ 00:17:29.112 { 00:17:29.112 "name": "pt1", 00:17:29.112 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:29.112 
"is_configured": true, 00:17:29.112 "data_offset": 256, 00:17:29.112 "data_size": 7936 00:17:29.112 }, 00:17:29.112 { 00:17:29.112 "name": "pt2", 00:17:29.112 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:29.112 "is_configured": true, 00:17:29.112 "data_offset": 256, 00:17:29.112 "data_size": 7936 00:17:29.112 } 00:17:29.112 ] 00:17:29.112 }' 00:17:29.112 20:30:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:29.112 20:30:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.679 20:30:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:29.679 20:30:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:29.679 20:30:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:29.679 20:30:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:29.679 20:30:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:29.679 20:30:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:29.679 20:30:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:29.679 20:30:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:29.679 20:30:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.679 20:30:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.679 [2024-11-26 20:30:22.993530] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:29.679 20:30:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:17:29.679 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:29.679 "name": "raid_bdev1", 00:17:29.679 "aliases": [ 00:17:29.679 "6b7712d4-9f58-4d37-bb51-ea629b0ad5b6" 00:17:29.679 ], 00:17:29.679 "product_name": "Raid Volume", 00:17:29.679 "block_size": 4096, 00:17:29.679 "num_blocks": 7936, 00:17:29.679 "uuid": "6b7712d4-9f58-4d37-bb51-ea629b0ad5b6", 00:17:29.679 "md_size": 32, 00:17:29.679 "md_interleave": false, 00:17:29.679 "dif_type": 0, 00:17:29.679 "assigned_rate_limits": { 00:17:29.679 "rw_ios_per_sec": 0, 00:17:29.679 "rw_mbytes_per_sec": 0, 00:17:29.679 "r_mbytes_per_sec": 0, 00:17:29.679 "w_mbytes_per_sec": 0 00:17:29.679 }, 00:17:29.679 "claimed": false, 00:17:29.679 "zoned": false, 00:17:29.679 "supported_io_types": { 00:17:29.679 "read": true, 00:17:29.679 "write": true, 00:17:29.679 "unmap": false, 00:17:29.679 "flush": false, 00:17:29.679 "reset": true, 00:17:29.679 "nvme_admin": false, 00:17:29.679 "nvme_io": false, 00:17:29.679 "nvme_io_md": false, 00:17:29.679 "write_zeroes": true, 00:17:29.679 "zcopy": false, 00:17:29.679 "get_zone_info": false, 00:17:29.679 "zone_management": false, 00:17:29.679 "zone_append": false, 00:17:29.679 "compare": false, 00:17:29.679 "compare_and_write": false, 00:17:29.679 "abort": false, 00:17:29.679 "seek_hole": false, 00:17:29.679 "seek_data": false, 00:17:29.679 "copy": false, 00:17:29.679 "nvme_iov_md": false 00:17:29.679 }, 00:17:29.679 "memory_domains": [ 00:17:29.679 { 00:17:29.679 "dma_device_id": "system", 00:17:29.679 "dma_device_type": 1 00:17:29.679 }, 00:17:29.679 { 00:17:29.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.679 "dma_device_type": 2 00:17:29.679 }, 00:17:29.679 { 00:17:29.679 "dma_device_id": "system", 00:17:29.679 "dma_device_type": 1 00:17:29.679 }, 00:17:29.679 { 00:17:29.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.679 "dma_device_type": 2 00:17:29.679 } 00:17:29.679 ], 00:17:29.679 "driver_specific": { 
00:17:29.679 "raid": { 00:17:29.679 "uuid": "6b7712d4-9f58-4d37-bb51-ea629b0ad5b6", 00:17:29.679 "strip_size_kb": 0, 00:17:29.679 "state": "online", 00:17:29.679 "raid_level": "raid1", 00:17:29.679 "superblock": true, 00:17:29.679 "num_base_bdevs": 2, 00:17:29.679 "num_base_bdevs_discovered": 2, 00:17:29.679 "num_base_bdevs_operational": 2, 00:17:29.679 "base_bdevs_list": [ 00:17:29.679 { 00:17:29.679 "name": "pt1", 00:17:29.679 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:29.679 "is_configured": true, 00:17:29.679 "data_offset": 256, 00:17:29.679 "data_size": 7936 00:17:29.679 }, 00:17:29.679 { 00:17:29.679 "name": "pt2", 00:17:29.679 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:29.679 "is_configured": true, 00:17:29.679 "data_offset": 256, 00:17:29.679 "data_size": 7936 00:17:29.679 } 00:17:29.679 ] 00:17:29.679 } 00:17:29.679 } 00:17:29.679 }' 00:17:29.679 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:29.679 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:29.679 pt2' 00:17:29.679 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:29.679 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:29.679 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:29.679 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:29.679 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:29.679 20:30:23 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.679 20:30:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.679 20:30:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.679 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:29.679 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:29.679 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:29.679 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:29.679 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:29.679 20:30:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.679 20:30:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.679 20:30:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.938 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:29.938 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:29.938 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:29.938 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:29.938 20:30:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.938 20:30:23 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.938 [2024-11-26 20:30:23.249157] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:29.938 20:30:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.938 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 6b7712d4-9f58-4d37-bb51-ea629b0ad5b6 '!=' 6b7712d4-9f58-4d37-bb51-ea629b0ad5b6 ']' 00:17:29.938 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:29.938 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:29.938 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:17:29.938 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:29.938 20:30:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.938 20:30:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.938 [2024-11-26 20:30:23.296793] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:29.938 20:30:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.938 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:29.938 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:29.938 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:29.938 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:29.938 20:30:23 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:29.938 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:29.938 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:29.938 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:29.938 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:29.938 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:29.938 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.938 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.938 20:30:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.938 20:30:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.938 20:30:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.938 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:29.938 "name": "raid_bdev1", 00:17:29.938 "uuid": "6b7712d4-9f58-4d37-bb51-ea629b0ad5b6", 00:17:29.938 "strip_size_kb": 0, 00:17:29.938 "state": "online", 00:17:29.938 "raid_level": "raid1", 00:17:29.938 "superblock": true, 00:17:29.938 "num_base_bdevs": 2, 00:17:29.938 "num_base_bdevs_discovered": 1, 00:17:29.938 "num_base_bdevs_operational": 1, 00:17:29.938 "base_bdevs_list": [ 00:17:29.938 { 00:17:29.938 "name": null, 00:17:29.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.938 "is_configured": false, 00:17:29.938 "data_offset": 0, 00:17:29.938 "data_size": 7936 00:17:29.938 }, 00:17:29.938 { 00:17:29.938 
"name": "pt2", 00:17:29.938 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:29.938 "is_configured": true, 00:17:29.938 "data_offset": 256, 00:17:29.938 "data_size": 7936 00:17:29.938 } 00:17:29.938 ] 00:17:29.938 }' 00:17:29.938 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:29.938 20:30:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.506 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:30.506 20:30:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.506 20:30:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.506 [2024-11-26 20:30:23.775885] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:30.506 [2024-11-26 20:30:23.776023] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:30.506 [2024-11-26 20:30:23.776142] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:30.506 [2024-11-26 20:30:23.776236] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:30.506 [2024-11-26 20:30:23.776289] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:17:30.506 20:30:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.506 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.506 20:30:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.506 20:30:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.506 20:30:23 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:30.506 20:30:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.506 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:30.506 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:30.506 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:30.506 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:30.506 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:30.506 20:30:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.506 20:30:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.506 20:30:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.506 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:30.506 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:30.506 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:30.506 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:30.506 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:17:30.506 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:30.506 20:30:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.506 20:30:23 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.506 [2024-11-26 20:30:23.847765] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:30.506 [2024-11-26 20:30:23.847848] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:30.506 [2024-11-26 20:30:23.847871] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:30.506 [2024-11-26 20:30:23.847882] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:30.507 [2024-11-26 20:30:23.850257] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:30.507 [2024-11-26 20:30:23.850304] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:30.507 [2024-11-26 20:30:23.850371] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:30.507 [2024-11-26 20:30:23.850406] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:30.507 [2024-11-26 20:30:23.850485] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:17:30.507 [2024-11-26 20:30:23.850495] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:30.507 [2024-11-26 20:30:23.850580] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:30.507 [2024-11-26 20:30:23.850693] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:17:30.507 [2024-11-26 20:30:23.850707] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:17:30.507 [2024-11-26 20:30:23.850789] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:30.507 pt2 00:17:30.507 20:30:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.507 20:30:23 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:30.507 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:30.507 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:30.507 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:30.507 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:30.507 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:30.507 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.507 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.507 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.507 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.507 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.507 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.507 20:30:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.507 20:30:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.507 20:30:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.507 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.507 "name": "raid_bdev1", 00:17:30.507 "uuid": 
"6b7712d4-9f58-4d37-bb51-ea629b0ad5b6", 00:17:30.507 "strip_size_kb": 0, 00:17:30.507 "state": "online", 00:17:30.507 "raid_level": "raid1", 00:17:30.507 "superblock": true, 00:17:30.507 "num_base_bdevs": 2, 00:17:30.507 "num_base_bdevs_discovered": 1, 00:17:30.507 "num_base_bdevs_operational": 1, 00:17:30.507 "base_bdevs_list": [ 00:17:30.507 { 00:17:30.507 "name": null, 00:17:30.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.507 "is_configured": false, 00:17:30.507 "data_offset": 256, 00:17:30.507 "data_size": 7936 00:17:30.507 }, 00:17:30.507 { 00:17:30.507 "name": "pt2", 00:17:30.507 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:30.507 "is_configured": true, 00:17:30.507 "data_offset": 256, 00:17:30.507 "data_size": 7936 00:17:30.507 } 00:17:30.507 ] 00:17:30.507 }' 00:17:30.507 20:30:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.507 20:30:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.767 20:30:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:30.767 20:30:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.767 20:30:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.767 [2024-11-26 20:30:24.287087] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:30.767 [2024-11-26 20:30:24.287220] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:30.767 [2024-11-26 20:30:24.287333] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:30.767 [2024-11-26 20:30:24.287402] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:30.767 [2024-11-26 20:30:24.287466] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:17:30.767 20:30:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.767 20:30:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.767 20:30:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.767 20:30:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.767 20:30:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:30.767 20:30:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.027 20:30:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:31.027 20:30:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:31.027 20:30:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:31.027 20:30:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:31.027 20:30:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.027 20:30:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:31.027 [2024-11-26 20:30:24.350957] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:31.027 [2024-11-26 20:30:24.351058] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:31.027 [2024-11-26 20:30:24.351081] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:31.027 [2024-11-26 20:30:24.351100] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.027 [2024-11-26 
20:30:24.353459] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.027 [2024-11-26 20:30:24.353507] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:31.027 [2024-11-26 20:30:24.353571] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:31.027 [2024-11-26 20:30:24.353636] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:31.027 [2024-11-26 20:30:24.353796] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:31.027 [2024-11-26 20:30:24.353819] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:31.027 [2024-11-26 20:30:24.353846] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:17:31.027 [2024-11-26 20:30:24.353887] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:31.027 [2024-11-26 20:30:24.353963] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:17:31.027 [2024-11-26 20:30:24.353976] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:31.027 [2024-11-26 20:30:24.354062] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:31.027 [2024-11-26 20:30:24.354154] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:17:31.027 [2024-11-26 20:30:24.354163] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:17:31.027 [2024-11-26 20:30:24.354254] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:31.027 pt1 00:17:31.027 20:30:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.027 20:30:24 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:17:31.027 20:30:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:31.027 20:30:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:31.027 20:30:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:31.027 20:30:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:31.027 20:30:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:31.027 20:30:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:31.027 20:30:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.027 20:30:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.027 20:30:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.027 20:30:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.027 20:30:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.027 20:30:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.027 20:30:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.027 20:30:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:31.027 20:30:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.027 20:30:24 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.027 "name": "raid_bdev1", 00:17:31.027 "uuid": "6b7712d4-9f58-4d37-bb51-ea629b0ad5b6", 00:17:31.027 "strip_size_kb": 0, 00:17:31.027 "state": "online", 00:17:31.027 "raid_level": "raid1", 00:17:31.027 "superblock": true, 00:17:31.027 "num_base_bdevs": 2, 00:17:31.027 "num_base_bdevs_discovered": 1, 00:17:31.027 "num_base_bdevs_operational": 1, 00:17:31.027 "base_bdevs_list": [ 00:17:31.027 { 00:17:31.027 "name": null, 00:17:31.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.027 "is_configured": false, 00:17:31.027 "data_offset": 256, 00:17:31.027 "data_size": 7936 00:17:31.027 }, 00:17:31.027 { 00:17:31.027 "name": "pt2", 00:17:31.027 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:31.027 "is_configured": true, 00:17:31.027 "data_offset": 256, 00:17:31.027 "data_size": 7936 00:17:31.027 } 00:17:31.027 ] 00:17:31.027 }' 00:17:31.027 20:30:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.027 20:30:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:31.288 20:30:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:31.288 20:30:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.288 20:30:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:31.288 20:30:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:31.288 20:30:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.548 20:30:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:31.548 20:30:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:17:31.548 20:30:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:31.548 20:30:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.548 20:30:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:31.548 [2024-11-26 20:30:24.874390] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:31.548 20:30:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.548 20:30:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 6b7712d4-9f58-4d37-bb51-ea629b0ad5b6 '!=' 6b7712d4-9f58-4d37-bb51-ea629b0ad5b6 ']' 00:17:31.548 20:30:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 98405 00:17:31.548 20:30:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # '[' -z 98405 ']' 00:17:31.548 20:30:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # kill -0 98405 00:17:31.548 20:30:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # uname 00:17:31.548 20:30:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:31.548 20:30:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98405 00:17:31.548 20:30:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:31.548 20:30:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:31.548 20:30:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98405' 00:17:31.548 killing process with pid 98405 00:17:31.548 20:30:24 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@969 -- # kill 98405 00:17:31.548 [2024-11-26 20:30:24.955982] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:31.548 [2024-11-26 20:30:24.956095] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:31.548 20:30:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@974 -- # wait 98405 00:17:31.548 [2024-11-26 20:30:24.956152] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:31.549 [2024-11-26 20:30:24.956164] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:17:31.549 [2024-11-26 20:30:24.994479] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:31.808 ************************************ 00:17:31.808 END TEST raid_superblock_test_md_separate 00:17:31.808 ************************************ 00:17:31.808 20:30:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:17:31.808 00:17:31.808 real 0m5.397s 00:17:31.808 user 0m8.706s 00:17:31.808 sys 0m1.196s 00:17:31.808 20:30:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:31.808 20:30:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:32.093 20:30:25 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:17:32.093 20:30:25 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:17:32.093 20:30:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:17:32.093 20:30:25 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:32.093 20:30:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:32.093 ************************************ 00:17:32.093 START TEST raid_rebuild_test_sb_md_separate 00:17:32.093 
************************************ 00:17:32.094 20:30:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:17:32.094 20:30:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:32.094 20:30:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:32.094 20:30:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:32.094 20:30:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:32.094 20:30:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:32.094 20:30:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:32.094 20:30:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:32.094 20:30:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:32.094 20:30:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:32.094 20:30:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:32.094 20:30:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:32.094 20:30:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:32.094 20:30:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:32.094 20:30:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:32.094 20:30:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:32.094 20:30:25 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:32.094 20:30:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:32.094 20:30:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:32.094 20:30:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:32.094 20:30:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:32.094 20:30:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:32.094 20:30:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:32.094 20:30:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:32.094 20:30:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:32.094 20:30:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=98720 00:17:32.094 20:30:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:32.094 20:30:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 98720 00:17:32.094 20:30:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 98720 ']' 00:17:32.094 20:30:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:32.094 20:30:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:32.094 20:30:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:32.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:32.094 20:30:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:32.094 20:30:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:32.094 [2024-11-26 20:30:25.526848] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:32.094 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:32.094 Zero copy mechanism will not be used. 00:17:32.094 [2024-11-26 20:30:25.527072] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98720 ] 00:17:32.389 [2024-11-26 20:30:25.680544] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.389 [2024-11-26 20:30:25.768359] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.389 [2024-11-26 20:30:25.843195] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:32.389 [2024-11-26 20:30:25.843329] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:32.957 20:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:32.957 20:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:17:32.957 20:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:32.957 20:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:17:32.957 20:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.957 20:30:26 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:32.957 BaseBdev1_malloc 00:17:32.957 20:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.957 20:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:32.957 20:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.957 20:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:32.957 [2024-11-26 20:30:26.450074] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:32.957 [2024-11-26 20:30:26.450264] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:32.957 [2024-11-26 20:30:26.450320] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:32.957 [2024-11-26 20:30:26.450364] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:32.957 [2024-11-26 20:30:26.452755] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:32.957 [2024-11-26 20:30:26.452842] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:32.957 BaseBdev1 00:17:32.957 20:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.957 20:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:32.957 20:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:17:32.957 20:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.957 20:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:17:32.957 BaseBdev2_malloc 00:17:32.957 20:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.957 20:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:32.957 20:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.957 20:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:32.957 [2024-11-26 20:30:26.496694] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:32.957 [2024-11-26 20:30:26.496875] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:32.957 [2024-11-26 20:30:26.496931] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:32.957 [2024-11-26 20:30:26.496978] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:32.957 [2024-11-26 20:30:26.499395] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:32.957 [2024-11-26 20:30:26.499479] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:32.957 BaseBdev2 00:17:32.957 20:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.957 20:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:17:32.957 20:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.957 20:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:33.216 spare_malloc 00:17:33.216 20:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.216 20:30:26 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:33.216 20:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.216 20:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:33.216 spare_delay 00:17:33.216 20:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.216 20:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:33.216 20:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.216 20:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:33.216 [2024-11-26 20:30:26.542593] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:33.216 [2024-11-26 20:30:26.542688] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:33.216 [2024-11-26 20:30:26.542719] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:33.216 [2024-11-26 20:30:26.542732] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:33.216 [2024-11-26 20:30:26.544946] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:33.216 [2024-11-26 20:30:26.545092] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:33.216 spare 00:17:33.216 20:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.216 20:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:33.216 20:30:26 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.216 20:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:33.216 [2024-11-26 20:30:26.554587] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:33.216 [2024-11-26 20:30:26.556465] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:33.216 [2024-11-26 20:30:26.556701] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:17:33.216 [2024-11-26 20:30:26.556718] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:33.216 [2024-11-26 20:30:26.556813] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:17:33.216 [2024-11-26 20:30:26.556932] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:17:33.216 [2024-11-26 20:30:26.556943] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:17:33.216 [2024-11-26 20:30:26.557056] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:33.216 20:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.216 20:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:33.216 20:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:33.216 20:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:33.216 20:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:33.216 20:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:33.216 20:30:26 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:33.216 20:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:33.216 20:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:33.216 20:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:33.216 20:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:33.216 20:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.216 20:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.216 20:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.216 20:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:33.216 20:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.216 20:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:33.216 "name": "raid_bdev1", 00:17:33.216 "uuid": "83b99969-9e6f-4240-8eb9-ec66a1895554", 00:17:33.216 "strip_size_kb": 0, 00:17:33.216 "state": "online", 00:17:33.216 "raid_level": "raid1", 00:17:33.216 "superblock": true, 00:17:33.216 "num_base_bdevs": 2, 00:17:33.216 "num_base_bdevs_discovered": 2, 00:17:33.216 "num_base_bdevs_operational": 2, 00:17:33.216 "base_bdevs_list": [ 00:17:33.216 { 00:17:33.216 "name": "BaseBdev1", 00:17:33.216 "uuid": "bf398175-0043-5946-ac90-e9889741e4e1", 00:17:33.216 "is_configured": true, 00:17:33.216 "data_offset": 256, 00:17:33.216 "data_size": 7936 00:17:33.216 }, 00:17:33.216 { 00:17:33.216 "name": "BaseBdev2", 00:17:33.216 "uuid": 
"6f1208c3-7423-5a7b-bb77-ae15dcbc6317", 00:17:33.216 "is_configured": true, 00:17:33.217 "data_offset": 256, 00:17:33.217 "data_size": 7936 00:17:33.217 } 00:17:33.217 ] 00:17:33.217 }' 00:17:33.217 20:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:33.217 20:30:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:33.475 20:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:33.475 20:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:33.475 20:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.475 20:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:33.475 [2024-11-26 20:30:27.026164] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:33.734 20:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.734 20:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:17:33.734 20:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:33.734 20:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.734 20:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.734 20:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:33.734 20:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.734 20:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:33.734 20:30:27 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:33.734 20:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:33.734 20:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:33.734 20:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:33.734 20:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:33.734 20:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:33.734 20:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:33.734 20:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:33.734 20:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:33.734 20:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:17:33.734 20:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:33.734 20:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:33.734 20:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:33.994 [2024-11-26 20:30:27.329373] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:33.994 /dev/nbd0 00:17:33.994 20:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:33.994 20:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:33.994 20:30:27 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:33.994 20:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:17:33.994 20:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:33.994 20:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:33.994 20:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:33.994 20:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:17:33.994 20:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:33.994 20:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:33.994 20:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:33.994 1+0 records in 00:17:33.994 1+0 records out 00:17:33.994 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000485757 s, 8.4 MB/s 00:17:33.994 20:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:33.994 20:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:17:33.994 20:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:33.994 20:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:33.994 20:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:17:33.994 20:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:33.994 20:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:33.994 20:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:33.994 20:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:33.994 20:30:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:17:34.931 7936+0 records in 00:17:34.931 7936+0 records out 00:17:34.931 32505856 bytes (33 MB, 31 MiB) copied, 0.733774 s, 44.3 MB/s 00:17:34.931 20:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:34.931 20:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:34.931 20:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:34.931 20:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:34.931 20:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:17:34.931 20:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:34.931 20:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:34.931 [2024-11-26 20:30:28.415527] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:34.931 20:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:34.931 20:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:34.931 20:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:34.931 20:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:34.931 20:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:34.931 20:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:34.931 20:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:17:34.931 20:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:34.931 20:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:34.931 20:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.931 20:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.931 [2024-11-26 20:30:28.459568] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:34.931 20:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.931 20:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:34.931 20:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:34.931 20:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:34.931 20:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:34.931 20:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:34.931 20:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:34.931 20:30:28 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.931 20:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.931 20:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.931 20:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.931 20:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.931 20:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.931 20:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.931 20:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:35.190 20:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.190 20:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.190 "name": "raid_bdev1", 00:17:35.190 "uuid": "83b99969-9e6f-4240-8eb9-ec66a1895554", 00:17:35.190 "strip_size_kb": 0, 00:17:35.190 "state": "online", 00:17:35.190 "raid_level": "raid1", 00:17:35.190 "superblock": true, 00:17:35.190 "num_base_bdevs": 2, 00:17:35.190 "num_base_bdevs_discovered": 1, 00:17:35.190 "num_base_bdevs_operational": 1, 00:17:35.190 "base_bdevs_list": [ 00:17:35.190 { 00:17:35.190 "name": null, 00:17:35.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.190 "is_configured": false, 00:17:35.190 "data_offset": 0, 00:17:35.190 "data_size": 7936 00:17:35.190 }, 00:17:35.190 { 00:17:35.190 "name": "BaseBdev2", 00:17:35.190 "uuid": "6f1208c3-7423-5a7b-bb77-ae15dcbc6317", 00:17:35.190 "is_configured": true, 00:17:35.190 "data_offset": 256, 00:17:35.190 "data_size": 7936 00:17:35.190 } 
00:17:35.190 ] 00:17:35.190 }' 00:17:35.190 20:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.190 20:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:35.449 20:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:35.449 20:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.449 20:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:35.449 [2024-11-26 20:30:28.930848] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:35.449 [2024-11-26 20:30:28.932945] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d0c0 00:17:35.449 [2024-11-26 20:30:28.935357] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:35.449 20:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.449 20:30:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:36.826 20:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:36.826 20:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:36.826 20:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:36.826 20:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:36.826 20:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:36.826 20:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.826 20:30:29 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.826 20:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.826 20:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:36.826 20:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.826 20:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:36.826 "name": "raid_bdev1", 00:17:36.826 "uuid": "83b99969-9e6f-4240-8eb9-ec66a1895554", 00:17:36.826 "strip_size_kb": 0, 00:17:36.826 "state": "online", 00:17:36.826 "raid_level": "raid1", 00:17:36.826 "superblock": true, 00:17:36.826 "num_base_bdevs": 2, 00:17:36.826 "num_base_bdevs_discovered": 2, 00:17:36.826 "num_base_bdevs_operational": 2, 00:17:36.826 "process": { 00:17:36.826 "type": "rebuild", 00:17:36.826 "target": "spare", 00:17:36.826 "progress": { 00:17:36.826 "blocks": 2560, 00:17:36.826 "percent": 32 00:17:36.826 } 00:17:36.826 }, 00:17:36.826 "base_bdevs_list": [ 00:17:36.826 { 00:17:36.826 "name": "spare", 00:17:36.826 "uuid": "e4c67f78-9053-5daf-b9e3-042fef67881b", 00:17:36.826 "is_configured": true, 00:17:36.826 "data_offset": 256, 00:17:36.826 "data_size": 7936 00:17:36.826 }, 00:17:36.826 { 00:17:36.826 "name": "BaseBdev2", 00:17:36.826 "uuid": "6f1208c3-7423-5a7b-bb77-ae15dcbc6317", 00:17:36.826 "is_configured": true, 00:17:36.826 "data_offset": 256, 00:17:36.826 "data_size": 7936 00:17:36.826 } 00:17:36.826 ] 00:17:36.826 }' 00:17:36.826 20:30:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:36.826 20:30:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:36.826 20:30:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:17:36.826 20:30:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:36.826 20:30:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:36.826 20:30:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.826 20:30:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:36.826 [2024-11-26 20:30:30.103481] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:36.826 [2024-11-26 20:30:30.145436] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:36.826 [2024-11-26 20:30:30.145688] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:36.826 [2024-11-26 20:30:30.145717] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:36.826 [2024-11-26 20:30:30.145727] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:36.826 20:30:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.826 20:30:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:36.826 20:30:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:36.826 20:30:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:36.826 20:30:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:36.826 20:30:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:36.826 20:30:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:17:36.826 20:30:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.826 20:30:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.826 20:30:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.826 20:30:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.826 20:30:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.826 20:30:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.826 20:30:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:36.826 20:30:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.826 20:30:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.826 20:30:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.826 "name": "raid_bdev1", 00:17:36.826 "uuid": "83b99969-9e6f-4240-8eb9-ec66a1895554", 00:17:36.826 "strip_size_kb": 0, 00:17:36.826 "state": "online", 00:17:36.826 "raid_level": "raid1", 00:17:36.826 "superblock": true, 00:17:36.826 "num_base_bdevs": 2, 00:17:36.826 "num_base_bdevs_discovered": 1, 00:17:36.826 "num_base_bdevs_operational": 1, 00:17:36.826 "base_bdevs_list": [ 00:17:36.826 { 00:17:36.826 "name": null, 00:17:36.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.826 "is_configured": false, 00:17:36.826 "data_offset": 0, 00:17:36.826 "data_size": 7936 00:17:36.826 }, 00:17:36.826 { 00:17:36.826 "name": "BaseBdev2", 00:17:36.826 "uuid": "6f1208c3-7423-5a7b-bb77-ae15dcbc6317", 00:17:36.826 "is_configured": true, 00:17:36.826 "data_offset": 
256, 00:17:36.826 "data_size": 7936 00:17:36.826 } 00:17:36.826 ] 00:17:36.826 }' 00:17:36.826 20:30:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.826 20:30:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:37.099 20:30:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:37.099 20:30:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:37.099 20:30:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:37.099 20:30:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:37.099 20:30:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:37.099 20:30:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.099 20:30:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.099 20:30:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.099 20:30:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:37.358 20:30:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.358 20:30:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:37.358 "name": "raid_bdev1", 00:17:37.358 "uuid": "83b99969-9e6f-4240-8eb9-ec66a1895554", 00:17:37.358 "strip_size_kb": 0, 00:17:37.358 "state": "online", 00:17:37.358 "raid_level": "raid1", 00:17:37.358 "superblock": true, 00:17:37.358 "num_base_bdevs": 2, 00:17:37.358 "num_base_bdevs_discovered": 1, 00:17:37.358 "num_base_bdevs_operational": 1, 
00:17:37.358 "base_bdevs_list": [ 00:17:37.358 { 00:17:37.358 "name": null, 00:17:37.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.358 "is_configured": false, 00:17:37.358 "data_offset": 0, 00:17:37.358 "data_size": 7936 00:17:37.358 }, 00:17:37.358 { 00:17:37.358 "name": "BaseBdev2", 00:17:37.358 "uuid": "6f1208c3-7423-5a7b-bb77-ae15dcbc6317", 00:17:37.358 "is_configured": true, 00:17:37.358 "data_offset": 256, 00:17:37.358 "data_size": 7936 00:17:37.358 } 00:17:37.358 ] 00:17:37.358 }' 00:17:37.358 20:30:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:37.358 20:30:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:37.358 20:30:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:37.358 20:30:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:37.358 20:30:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:37.358 20:30:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.358 20:30:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:37.358 [2024-11-26 20:30:30.789827] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:37.358 [2024-11-26 20:30:30.791861] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d190 00:17:37.358 [2024-11-26 20:30:30.794149] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:37.358 20:30:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.358 20:30:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:38.295 20:30:31 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:38.295 20:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:38.295 20:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:38.295 20:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:38.295 20:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:38.295 20:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.295 20:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.295 20:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.295 20:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.295 20:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.295 20:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:38.295 "name": "raid_bdev1", 00:17:38.295 "uuid": "83b99969-9e6f-4240-8eb9-ec66a1895554", 00:17:38.295 "strip_size_kb": 0, 00:17:38.295 "state": "online", 00:17:38.295 "raid_level": "raid1", 00:17:38.295 "superblock": true, 00:17:38.295 "num_base_bdevs": 2, 00:17:38.295 "num_base_bdevs_discovered": 2, 00:17:38.295 "num_base_bdevs_operational": 2, 00:17:38.295 "process": { 00:17:38.295 "type": "rebuild", 00:17:38.295 "target": "spare", 00:17:38.295 "progress": { 00:17:38.295 "blocks": 2560, 00:17:38.295 "percent": 32 00:17:38.295 } 00:17:38.295 }, 00:17:38.295 "base_bdevs_list": [ 00:17:38.295 { 00:17:38.295 "name": "spare", 00:17:38.295 "uuid": 
"e4c67f78-9053-5daf-b9e3-042fef67881b", 00:17:38.295 "is_configured": true, 00:17:38.295 "data_offset": 256, 00:17:38.295 "data_size": 7936 00:17:38.295 }, 00:17:38.295 { 00:17:38.295 "name": "BaseBdev2", 00:17:38.295 "uuid": "6f1208c3-7423-5a7b-bb77-ae15dcbc6317", 00:17:38.295 "is_configured": true, 00:17:38.295 "data_offset": 256, 00:17:38.295 "data_size": 7936 00:17:38.295 } 00:17:38.295 ] 00:17:38.295 }' 00:17:38.295 20:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:38.555 20:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:38.555 20:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:38.555 20:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:38.555 20:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:38.555 20:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:38.555 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:38.555 20:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:38.555 20:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:38.555 20:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:38.555 20:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=620 00:17:38.555 20:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:38.555 20:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:38.555 
20:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:38.555 20:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:38.555 20:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:38.555 20:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:38.555 20:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.555 20:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.555 20:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.555 20:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.555 20:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.555 20:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:38.555 "name": "raid_bdev1", 00:17:38.555 "uuid": "83b99969-9e6f-4240-8eb9-ec66a1895554", 00:17:38.555 "strip_size_kb": 0, 00:17:38.555 "state": "online", 00:17:38.555 "raid_level": "raid1", 00:17:38.555 "superblock": true, 00:17:38.555 "num_base_bdevs": 2, 00:17:38.555 "num_base_bdevs_discovered": 2, 00:17:38.555 "num_base_bdevs_operational": 2, 00:17:38.555 "process": { 00:17:38.555 "type": "rebuild", 00:17:38.555 "target": "spare", 00:17:38.555 "progress": { 00:17:38.555 "blocks": 2816, 00:17:38.555 "percent": 35 00:17:38.555 } 00:17:38.555 }, 00:17:38.555 "base_bdevs_list": [ 00:17:38.555 { 00:17:38.555 "name": "spare", 00:17:38.555 "uuid": "e4c67f78-9053-5daf-b9e3-042fef67881b", 00:17:38.555 "is_configured": true, 00:17:38.555 "data_offset": 256, 00:17:38.555 "data_size": 7936 00:17:38.555 
}, 00:17:38.555 { 00:17:38.555 "name": "BaseBdev2", 00:17:38.555 "uuid": "6f1208c3-7423-5a7b-bb77-ae15dcbc6317", 00:17:38.555 "is_configured": true, 00:17:38.555 "data_offset": 256, 00:17:38.555 "data_size": 7936 00:17:38.555 } 00:17:38.555 ] 00:17:38.555 }' 00:17:38.555 20:30:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:38.555 20:30:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:38.555 20:30:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:38.555 20:30:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:38.555 20:30:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:39.936 20:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:39.936 20:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:39.936 20:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:39.936 20:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:39.936 20:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:39.936 20:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:39.936 20:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.936 20:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.936 20:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:39.936 20:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:39.936 20:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.936 20:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:39.936 "name": "raid_bdev1", 00:17:39.936 "uuid": "83b99969-9e6f-4240-8eb9-ec66a1895554", 00:17:39.936 "strip_size_kb": 0, 00:17:39.936 "state": "online", 00:17:39.936 "raid_level": "raid1", 00:17:39.936 "superblock": true, 00:17:39.936 "num_base_bdevs": 2, 00:17:39.936 "num_base_bdevs_discovered": 2, 00:17:39.936 "num_base_bdevs_operational": 2, 00:17:39.936 "process": { 00:17:39.936 "type": "rebuild", 00:17:39.936 "target": "spare", 00:17:39.936 "progress": { 00:17:39.936 "blocks": 5632, 00:17:39.936 "percent": 70 00:17:39.936 } 00:17:39.936 }, 00:17:39.936 "base_bdevs_list": [ 00:17:39.936 { 00:17:39.936 "name": "spare", 00:17:39.936 "uuid": "e4c67f78-9053-5daf-b9e3-042fef67881b", 00:17:39.936 "is_configured": true, 00:17:39.936 "data_offset": 256, 00:17:39.936 "data_size": 7936 00:17:39.936 }, 00:17:39.936 { 00:17:39.936 "name": "BaseBdev2", 00:17:39.936 "uuid": "6f1208c3-7423-5a7b-bb77-ae15dcbc6317", 00:17:39.936 "is_configured": true, 00:17:39.936 "data_offset": 256, 00:17:39.936 "data_size": 7936 00:17:39.936 } 00:17:39.936 ] 00:17:39.936 }' 00:17:39.936 20:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:39.936 20:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:39.936 20:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:39.936 20:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:39.936 20:30:33 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:17:40.503 [2024-11-26 20:30:33.916956] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:40.503 [2024-11-26 20:30:33.917101] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:40.503 [2024-11-26 20:30:33.917254] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:40.762 20:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:40.762 20:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:40.762 20:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:40.762 20:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:40.762 20:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:40.762 20:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:40.762 20:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.762 20:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.762 20:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:40.762 20:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.762 20:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.048 20:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:41.048 "name": "raid_bdev1", 00:17:41.048 "uuid": "83b99969-9e6f-4240-8eb9-ec66a1895554", 00:17:41.048 
"strip_size_kb": 0, 00:17:41.048 "state": "online", 00:17:41.048 "raid_level": "raid1", 00:17:41.048 "superblock": true, 00:17:41.048 "num_base_bdevs": 2, 00:17:41.048 "num_base_bdevs_discovered": 2, 00:17:41.048 "num_base_bdevs_operational": 2, 00:17:41.048 "base_bdevs_list": [ 00:17:41.048 { 00:17:41.048 "name": "spare", 00:17:41.048 "uuid": "e4c67f78-9053-5daf-b9e3-042fef67881b", 00:17:41.048 "is_configured": true, 00:17:41.048 "data_offset": 256, 00:17:41.048 "data_size": 7936 00:17:41.048 }, 00:17:41.048 { 00:17:41.048 "name": "BaseBdev2", 00:17:41.048 "uuid": "6f1208c3-7423-5a7b-bb77-ae15dcbc6317", 00:17:41.048 "is_configured": true, 00:17:41.048 "data_offset": 256, 00:17:41.048 "data_size": 7936 00:17:41.048 } 00:17:41.048 ] 00:17:41.048 }' 00:17:41.048 20:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:41.048 20:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:41.048 20:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:41.048 20:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:41.048 20:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:17:41.048 20:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:41.048 20:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:41.048 20:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:41.048 20:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:41.048 20:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:41.048 20:30:34 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.048 20:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.048 20:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.048 20:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:41.048 20:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.048 20:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:41.048 "name": "raid_bdev1", 00:17:41.048 "uuid": "83b99969-9e6f-4240-8eb9-ec66a1895554", 00:17:41.048 "strip_size_kb": 0, 00:17:41.048 "state": "online", 00:17:41.048 "raid_level": "raid1", 00:17:41.048 "superblock": true, 00:17:41.048 "num_base_bdevs": 2, 00:17:41.048 "num_base_bdevs_discovered": 2, 00:17:41.048 "num_base_bdevs_operational": 2, 00:17:41.048 "base_bdevs_list": [ 00:17:41.048 { 00:17:41.048 "name": "spare", 00:17:41.048 "uuid": "e4c67f78-9053-5daf-b9e3-042fef67881b", 00:17:41.048 "is_configured": true, 00:17:41.048 "data_offset": 256, 00:17:41.048 "data_size": 7936 00:17:41.048 }, 00:17:41.048 { 00:17:41.048 "name": "BaseBdev2", 00:17:41.048 "uuid": "6f1208c3-7423-5a7b-bb77-ae15dcbc6317", 00:17:41.048 "is_configured": true, 00:17:41.048 "data_offset": 256, 00:17:41.048 "data_size": 7936 00:17:41.048 } 00:17:41.048 ] 00:17:41.048 }' 00:17:41.048 20:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:41.048 20:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:41.048 20:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:41.308 20:30:34 
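The repeated `jq -r '.process.type // "none"'` checks above rely on jq's `//` alternative operator: once the rebuild completes, the raid bdev's JSON no longer contains a `process` object, so the expression falls back to the literal string `"none"`. A small sketch with hypothetical sample JSON:

```shell
#!/usr/bin/env bash
# During the rebuild the process object is present, so its type is printed.
json_during='{"process":{"type":"rebuild","target":"spare"}}'
# After the rebuild finishes there is no .process key; // supplies "none".
json_after='{"name":"raid_bdev1","state":"online"}'

echo "$json_during" | jq -r '.process.type // "none"'   # rebuild
echo "$json_after"  | jq -r '.process.type // "none"'   # none
```

This is why the log flips from `[[ rebuild == \r\e\b\u\i\l\d ]]` to `[[ none == \r\e\b\u\i\l\d ]]` once `raid_bdev_process_finish_done` fires: the same jq filter starts returning the fallback value.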
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:41.308 20:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:41.308 20:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:41.308 20:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:41.308 20:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:41.308 20:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:41.309 20:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:41.309 20:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:41.309 20:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.309 20:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:41.309 20:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.309 20:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.309 20:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.309 20:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:41.309 20:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.309 20:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.309 20:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.309 "name": "raid_bdev1", 00:17:41.309 "uuid": "83b99969-9e6f-4240-8eb9-ec66a1895554", 00:17:41.309 "strip_size_kb": 0, 00:17:41.309 "state": "online", 00:17:41.309 "raid_level": "raid1", 00:17:41.309 "superblock": true, 00:17:41.309 "num_base_bdevs": 2, 00:17:41.309 "num_base_bdevs_discovered": 2, 00:17:41.309 "num_base_bdevs_operational": 2, 00:17:41.309 "base_bdevs_list": [ 00:17:41.309 { 00:17:41.309 "name": "spare", 00:17:41.309 "uuid": "e4c67f78-9053-5daf-b9e3-042fef67881b", 00:17:41.309 "is_configured": true, 00:17:41.309 "data_offset": 256, 00:17:41.309 "data_size": 7936 00:17:41.309 }, 00:17:41.309 { 00:17:41.309 "name": "BaseBdev2", 00:17:41.309 "uuid": "6f1208c3-7423-5a7b-bb77-ae15dcbc6317", 00:17:41.309 "is_configured": true, 00:17:41.309 "data_offset": 256, 00:17:41.309 "data_size": 7936 00:17:41.309 } 00:17:41.309 ] 00:17:41.309 }' 00:17:41.309 20:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.309 20:30:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:41.569 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:41.569 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.569 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:41.569 [2024-11-26 20:30:35.077189] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:41.569 [2024-11-26 20:30:35.077326] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:41.569 [2024-11-26 20:30:35.077467] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:41.569 [2024-11-26 20:30:35.077610] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all 
in destruct 00:17:41.569 [2024-11-26 20:30:35.077707] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:17:41.569 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.569 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.569 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:17:41.569 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.569 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:41.569 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.569 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:41.569 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:41.569 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:41.569 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:41.569 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:41.569 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:41.569 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:41.569 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:41.569 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:41.569 20:30:35 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:17:41.569 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:41.569 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:41.569 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:41.828 /dev/nbd0 00:17:42.088 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:42.088 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:42.088 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:42.088 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:17:42.088 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:42.088 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:42.088 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:42.088 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:17:42.088 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:42.088 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:42.088 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:42.088 1+0 records in 00:17:42.088 1+0 records out 00:17:42.088 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386713 
s, 10.6 MB/s 00:17:42.088 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:42.088 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:17:42.088 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:42.088 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:42.088 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:17:42.088 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:42.088 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:42.088 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:42.347 /dev/nbd1 00:17:42.347 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:42.347 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:42.347 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:17:42.347 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:17:42.347 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:42.347 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:42.347 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:17:42.347 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@873 -- # break 00:17:42.347 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:42.347 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:42.347 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:42.347 1+0 records in 00:17:42.347 1+0 records out 00:17:42.347 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000300313 s, 13.6 MB/s 00:17:42.347 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:42.347 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:17:42.347 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:42.347 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:42.347 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:17:42.348 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:42.348 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:42.348 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:42.348 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:42.348 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:42.348 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # 
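The `cmp -i 1048576 /dev/nbd0 /dev/nbd1` step above verifies that the rebuilt base bdev matches the spare: `-i` (`--ignore-initial`) skips the first N bytes of both inputs, here excluding the leading 1 MiB region so only the data area is compared. A small sketch with throwaway files (paths are illustrative):

```shell
#!/usr/bin/env bash
# Two files that differ only in their first 4 bytes (a stand-in for
# per-device metadata), but share identical payload after that offset.
printf 'AAAA0123456789' > /tmp/dev_a.bin
printf 'BBBB0123456789' > /tmp/dev_b.bin

# cmp -i N skips N initial bytes of BOTH files; exit 0 means the
# remainders are byte-identical.
if cmp -i 4 /tmp/dev_a.bin /tmp/dev_b.bin; then
  echo "payloads identical past offset 4"
fi
```

Comparing through NBD (`/dev/nbd0`, `/dev/nbd1`) lets the test read both bdevs as ordinary block devices with standard userland tools.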
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:42.348 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:42.348 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:17:42.348 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:42.348 20:30:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:42.606 20:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:42.606 20:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:42.606 20:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:42.606 20:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:42.606 20:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:42.606 20:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:42.606 20:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:17:42.606 20:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:42.606 20:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:42.606 20:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:43.174 20:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:43.174 20:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:43.174 
20:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:43.174 20:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:43.174 20:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:43.174 20:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:43.174 20:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:17:43.174 20:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:43.174 20:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:43.174 20:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:43.174 20:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.174 20:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.174 20:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.174 20:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:43.174 20:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.174 20:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.174 [2024-11-26 20:30:36.451931] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:43.174 [2024-11-26 20:30:36.452013] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:43.174 [2024-11-26 20:30:36.452039] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 
00:17:43.174 [2024-11-26 20:30:36.452055] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:43.174 [2024-11-26 20:30:36.454539] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:43.174 [2024-11-26 20:30:36.454706] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:43.174 [2024-11-26 20:30:36.454800] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:43.174 [2024-11-26 20:30:36.454860] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:43.174 [2024-11-26 20:30:36.455012] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:43.174 spare 00:17:43.174 20:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.174 20:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:43.174 20:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.174 20:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.174 [2024-11-26 20:30:36.554955] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:17:43.174 [2024-11-26 20:30:36.555016] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:43.174 [2024-11-26 20:30:36.555192] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c19b0 00:17:43.174 [2024-11-26 20:30:36.555363] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:17:43.174 [2024-11-26 20:30:36.555377] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:17:43.174 [2024-11-26 20:30:36.555523] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:17:43.174 20:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.174 20:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:43.174 20:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:43.174 20:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:43.174 20:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:43.174 20:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:43.174 20:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:43.174 20:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.174 20:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.174 20:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.174 20:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.174 20:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.174 20:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.174 20:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.174 20:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.174 20:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.174 20:30:36 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.174 "name": "raid_bdev1", 00:17:43.174 "uuid": "83b99969-9e6f-4240-8eb9-ec66a1895554", 00:17:43.174 "strip_size_kb": 0, 00:17:43.174 "state": "online", 00:17:43.174 "raid_level": "raid1", 00:17:43.174 "superblock": true, 00:17:43.174 "num_base_bdevs": 2, 00:17:43.174 "num_base_bdevs_discovered": 2, 00:17:43.174 "num_base_bdevs_operational": 2, 00:17:43.175 "base_bdevs_list": [ 00:17:43.175 { 00:17:43.175 "name": "spare", 00:17:43.175 "uuid": "e4c67f78-9053-5daf-b9e3-042fef67881b", 00:17:43.175 "is_configured": true, 00:17:43.175 "data_offset": 256, 00:17:43.175 "data_size": 7936 00:17:43.175 }, 00:17:43.175 { 00:17:43.175 "name": "BaseBdev2", 00:17:43.175 "uuid": "6f1208c3-7423-5a7b-bb77-ae15dcbc6317", 00:17:43.175 "is_configured": true, 00:17:43.175 "data_offset": 256, 00:17:43.175 "data_size": 7936 00:17:43.175 } 00:17:43.175 ] 00:17:43.175 }' 00:17:43.175 20:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.175 20:30:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.744 20:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:43.744 20:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:43.744 20:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:43.744 20:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:43.744 20:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:43.744 20:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.744 20:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.744 20:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.744 20:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.744 20:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.744 20:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:43.744 "name": "raid_bdev1", 00:17:43.744 "uuid": "83b99969-9e6f-4240-8eb9-ec66a1895554", 00:17:43.744 "strip_size_kb": 0, 00:17:43.744 "state": "online", 00:17:43.744 "raid_level": "raid1", 00:17:43.744 "superblock": true, 00:17:43.744 "num_base_bdevs": 2, 00:17:43.744 "num_base_bdevs_discovered": 2, 00:17:43.744 "num_base_bdevs_operational": 2, 00:17:43.744 "base_bdevs_list": [ 00:17:43.744 { 00:17:43.744 "name": "spare", 00:17:43.744 "uuid": "e4c67f78-9053-5daf-b9e3-042fef67881b", 00:17:43.744 "is_configured": true, 00:17:43.744 "data_offset": 256, 00:17:43.744 "data_size": 7936 00:17:43.744 }, 00:17:43.744 { 00:17:43.744 "name": "BaseBdev2", 00:17:43.744 "uuid": "6f1208c3-7423-5a7b-bb77-ae15dcbc6317", 00:17:43.744 "is_configured": true, 00:17:43.744 "data_offset": 256, 00:17:43.744 "data_size": 7936 00:17:43.744 } 00:17:43.744 ] 00:17:43.744 }' 00:17:43.744 20:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:43.744 20:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:43.744 20:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:43.744 20:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:43.744 20:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:17:43.744 20:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.744 20:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:43.744 20:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.744 20:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.744 20:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:43.744 20:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:43.744 20:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.744 20:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.744 [2024-11-26 20:30:37.223564] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:43.744 20:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.744 20:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:43.744 20:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:43.744 20:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:43.744 20:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:43.744 20:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:43.744 20:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:43.744 20:30:37 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.744 20:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.744 20:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.744 20:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.744 20:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.744 20:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.744 20:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.744 20:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.744 20:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.744 20:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.744 "name": "raid_bdev1", 00:17:43.745 "uuid": "83b99969-9e6f-4240-8eb9-ec66a1895554", 00:17:43.745 "strip_size_kb": 0, 00:17:43.745 "state": "online", 00:17:43.745 "raid_level": "raid1", 00:17:43.745 "superblock": true, 00:17:43.745 "num_base_bdevs": 2, 00:17:43.745 "num_base_bdevs_discovered": 1, 00:17:43.745 "num_base_bdevs_operational": 1, 00:17:43.745 "base_bdevs_list": [ 00:17:43.745 { 00:17:43.745 "name": null, 00:17:43.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.745 "is_configured": false, 00:17:43.745 "data_offset": 0, 00:17:43.745 "data_size": 7936 00:17:43.745 }, 00:17:43.745 { 00:17:43.745 "name": "BaseBdev2", 00:17:43.745 "uuid": "6f1208c3-7423-5a7b-bb77-ae15dcbc6317", 00:17:43.745 "is_configured": true, 00:17:43.745 "data_offset": 256, 00:17:43.745 "data_size": 7936 00:17:43.745 } 
00:17:43.745 ] 00:17:43.745 }' 00:17:43.745 20:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.745 20:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.374 20:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:44.375 20:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.375 20:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.375 [2024-11-26 20:30:37.658851] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:44.375 [2024-11-26 20:30:37.659071] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:44.375 [2024-11-26 20:30:37.659097] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:44.375 [2024-11-26 20:30:37.659152] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:44.375 [2024-11-26 20:30:37.661096] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1a80 00:17:44.375 [2024-11-26 20:30:37.663446] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:44.375 20:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.375 20:30:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:45.311 20:30:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:45.311 20:30:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:45.311 20:30:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:45.311 20:30:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:45.311 20:30:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:45.311 20:30:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.311 20:30:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.311 20:30:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.311 20:30:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.311 20:30:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.311 20:30:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:45.311 "name": "raid_bdev1", 00:17:45.311 
"uuid": "83b99969-9e6f-4240-8eb9-ec66a1895554", 00:17:45.311 "strip_size_kb": 0, 00:17:45.311 "state": "online", 00:17:45.311 "raid_level": "raid1", 00:17:45.311 "superblock": true, 00:17:45.311 "num_base_bdevs": 2, 00:17:45.311 "num_base_bdevs_discovered": 2, 00:17:45.311 "num_base_bdevs_operational": 2, 00:17:45.311 "process": { 00:17:45.311 "type": "rebuild", 00:17:45.311 "target": "spare", 00:17:45.311 "progress": { 00:17:45.311 "blocks": 2560, 00:17:45.311 "percent": 32 00:17:45.311 } 00:17:45.311 }, 00:17:45.311 "base_bdevs_list": [ 00:17:45.311 { 00:17:45.311 "name": "spare", 00:17:45.311 "uuid": "e4c67f78-9053-5daf-b9e3-042fef67881b", 00:17:45.311 "is_configured": true, 00:17:45.311 "data_offset": 256, 00:17:45.311 "data_size": 7936 00:17:45.311 }, 00:17:45.311 { 00:17:45.311 "name": "BaseBdev2", 00:17:45.311 "uuid": "6f1208c3-7423-5a7b-bb77-ae15dcbc6317", 00:17:45.311 "is_configured": true, 00:17:45.311 "data_offset": 256, 00:17:45.311 "data_size": 7936 00:17:45.311 } 00:17:45.311 ] 00:17:45.311 }' 00:17:45.311 20:30:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:45.311 20:30:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:45.311 20:30:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:45.311 20:30:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:45.311 20:30:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:45.311 20:30:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.311 20:30:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.311 [2024-11-26 20:30:38.819037] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:45.571 
[2024-11-26 20:30:38.872364] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:45.571 [2024-11-26 20:30:38.872555] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:45.571 [2024-11-26 20:30:38.872582] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:45.571 [2024-11-26 20:30:38.872591] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:45.571 20:30:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.571 20:30:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:45.571 20:30:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:45.571 20:30:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:45.571 20:30:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:45.571 20:30:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:45.571 20:30:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:45.571 20:30:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.571 20:30:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.571 20:30:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.571 20:30:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.571 20:30:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.571 20:30:38 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.571 20:30:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.571 20:30:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.571 20:30:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.571 20:30:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.571 "name": "raid_bdev1", 00:17:45.571 "uuid": "83b99969-9e6f-4240-8eb9-ec66a1895554", 00:17:45.571 "strip_size_kb": 0, 00:17:45.571 "state": "online", 00:17:45.571 "raid_level": "raid1", 00:17:45.571 "superblock": true, 00:17:45.571 "num_base_bdevs": 2, 00:17:45.571 "num_base_bdevs_discovered": 1, 00:17:45.571 "num_base_bdevs_operational": 1, 00:17:45.571 "base_bdevs_list": [ 00:17:45.571 { 00:17:45.571 "name": null, 00:17:45.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.571 "is_configured": false, 00:17:45.571 "data_offset": 0, 00:17:45.571 "data_size": 7936 00:17:45.571 }, 00:17:45.571 { 00:17:45.571 "name": "BaseBdev2", 00:17:45.571 "uuid": "6f1208c3-7423-5a7b-bb77-ae15dcbc6317", 00:17:45.571 "is_configured": true, 00:17:45.571 "data_offset": 256, 00:17:45.571 "data_size": 7936 00:17:45.571 } 00:17:45.571 ] 00:17:45.571 }' 00:17:45.571 20:30:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.571 20:30:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.835 20:30:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:45.835 20:30:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.835 20:30:39 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:17:45.835 [2024-11-26 20:30:39.349201] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:45.835 [2024-11-26 20:30:39.349292] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:45.835 [2024-11-26 20:30:39.349325] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:45.835 [2024-11-26 20:30:39.349338] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:45.835 [2024-11-26 20:30:39.349607] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:45.835 [2024-11-26 20:30:39.349652] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:45.835 [2024-11-26 20:30:39.349736] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:45.835 [2024-11-26 20:30:39.349752] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:45.835 [2024-11-26 20:30:39.349771] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:45.835 [2024-11-26 20:30:39.349812] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:45.835 [2024-11-26 20:30:39.351635] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:17:45.835 [2024-11-26 20:30:39.354036] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:45.835 spare 00:17:45.835 20:30:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.835 20:30:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:47.214 20:30:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:47.214 20:30:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:47.214 20:30:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:47.214 20:30:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:47.215 20:30:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:47.215 20:30:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.215 20:30:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.215 20:30:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.215 20:30:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.215 20:30:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.215 20:30:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:47.215 "name": 
"raid_bdev1", 00:17:47.215 "uuid": "83b99969-9e6f-4240-8eb9-ec66a1895554", 00:17:47.215 "strip_size_kb": 0, 00:17:47.215 "state": "online", 00:17:47.215 "raid_level": "raid1", 00:17:47.215 "superblock": true, 00:17:47.215 "num_base_bdevs": 2, 00:17:47.215 "num_base_bdevs_discovered": 2, 00:17:47.215 "num_base_bdevs_operational": 2, 00:17:47.215 "process": { 00:17:47.215 "type": "rebuild", 00:17:47.215 "target": "spare", 00:17:47.215 "progress": { 00:17:47.215 "blocks": 2560, 00:17:47.215 "percent": 32 00:17:47.215 } 00:17:47.215 }, 00:17:47.215 "base_bdevs_list": [ 00:17:47.215 { 00:17:47.215 "name": "spare", 00:17:47.215 "uuid": "e4c67f78-9053-5daf-b9e3-042fef67881b", 00:17:47.215 "is_configured": true, 00:17:47.215 "data_offset": 256, 00:17:47.215 "data_size": 7936 00:17:47.215 }, 00:17:47.215 { 00:17:47.215 "name": "BaseBdev2", 00:17:47.215 "uuid": "6f1208c3-7423-5a7b-bb77-ae15dcbc6317", 00:17:47.215 "is_configured": true, 00:17:47.215 "data_offset": 256, 00:17:47.215 "data_size": 7936 00:17:47.215 } 00:17:47.215 ] 00:17:47.215 }' 00:17:47.215 20:30:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:47.215 20:30:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:47.215 20:30:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:47.215 20:30:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:47.215 20:30:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:47.215 20:30:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.215 20:30:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.215 [2024-11-26 20:30:40.502062] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:17:47.215 [2024-11-26 20:30:40.563480] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:47.215 [2024-11-26 20:30:40.563711] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:47.215 [2024-11-26 20:30:40.563759] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:47.215 [2024-11-26 20:30:40.563797] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:47.215 20:30:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.215 20:30:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:47.215 20:30:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:47.215 20:30:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:47.215 20:30:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:47.215 20:30:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:47.215 20:30:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:47.215 20:30:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.215 20:30:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.215 20:30:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.215 20:30:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.215 20:30:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:47.215 20:30:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.215 20:30:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.215 20:30:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.215 20:30:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.215 20:30:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.215 "name": "raid_bdev1", 00:17:47.215 "uuid": "83b99969-9e6f-4240-8eb9-ec66a1895554", 00:17:47.215 "strip_size_kb": 0, 00:17:47.215 "state": "online", 00:17:47.215 "raid_level": "raid1", 00:17:47.215 "superblock": true, 00:17:47.215 "num_base_bdevs": 2, 00:17:47.215 "num_base_bdevs_discovered": 1, 00:17:47.215 "num_base_bdevs_operational": 1, 00:17:47.215 "base_bdevs_list": [ 00:17:47.215 { 00:17:47.215 "name": null, 00:17:47.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.215 "is_configured": false, 00:17:47.215 "data_offset": 0, 00:17:47.215 "data_size": 7936 00:17:47.215 }, 00:17:47.215 { 00:17:47.215 "name": "BaseBdev2", 00:17:47.215 "uuid": "6f1208c3-7423-5a7b-bb77-ae15dcbc6317", 00:17:47.215 "is_configured": true, 00:17:47.215 "data_offset": 256, 00:17:47.215 "data_size": 7936 00:17:47.215 } 00:17:47.215 ] 00:17:47.215 }' 00:17:47.215 20:30:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.215 20:30:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.782 20:30:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:47.782 20:30:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:47.782 20:30:41 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:47.782 20:30:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:47.782 20:30:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:47.782 20:30:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.782 20:30:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.782 20:30:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.782 20:30:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.782 20:30:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.782 20:30:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:47.782 "name": "raid_bdev1", 00:17:47.782 "uuid": "83b99969-9e6f-4240-8eb9-ec66a1895554", 00:17:47.782 "strip_size_kb": 0, 00:17:47.782 "state": "online", 00:17:47.782 "raid_level": "raid1", 00:17:47.782 "superblock": true, 00:17:47.782 "num_base_bdevs": 2, 00:17:47.782 "num_base_bdevs_discovered": 1, 00:17:47.782 "num_base_bdevs_operational": 1, 00:17:47.782 "base_bdevs_list": [ 00:17:47.782 { 00:17:47.782 "name": null, 00:17:47.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.782 "is_configured": false, 00:17:47.782 "data_offset": 0, 00:17:47.782 "data_size": 7936 00:17:47.782 }, 00:17:47.782 { 00:17:47.782 "name": "BaseBdev2", 00:17:47.782 "uuid": "6f1208c3-7423-5a7b-bb77-ae15dcbc6317", 00:17:47.782 "is_configured": true, 00:17:47.782 "data_offset": 256, 00:17:47.782 "data_size": 7936 00:17:47.782 } 00:17:47.782 ] 00:17:47.782 }' 00:17:47.782 20:30:41 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:47.783 20:30:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:47.783 20:30:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:47.783 20:30:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:47.783 20:30:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:47.783 20:30:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.783 20:30:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.783 20:30:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.783 20:30:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:47.783 20:30:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.783 20:30:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.783 [2024-11-26 20:30:41.203809] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:47.783 [2024-11-26 20:30:41.203968] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.783 [2024-11-26 20:30:41.204000] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:47.783 [2024-11-26 20:30:41.204014] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.783 [2024-11-26 20:30:41.204253] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.783 [2024-11-26 20:30:41.204274] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:17:47.783 [2024-11-26 20:30:41.204341] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:47.783 [2024-11-26 20:30:41.204363] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:47.783 [2024-11-26 20:30:41.204372] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:47.783 [2024-11-26 20:30:41.204390] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:47.783 BaseBdev1 00:17:47.783 20:30:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.783 20:30:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:48.717 20:30:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:48.717 20:30:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:48.717 20:30:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:48.717 20:30:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:48.717 20:30:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:48.717 20:30:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:48.717 20:30:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.717 20:30:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.717 20:30:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:48.717 20:30:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.717 20:30:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.717 20:30:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.717 20:30:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.717 20:30:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.717 20:30:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.717 20:30:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.717 "name": "raid_bdev1", 00:17:48.717 "uuid": "83b99969-9e6f-4240-8eb9-ec66a1895554", 00:17:48.717 "strip_size_kb": 0, 00:17:48.717 "state": "online", 00:17:48.717 "raid_level": "raid1", 00:17:48.717 "superblock": true, 00:17:48.717 "num_base_bdevs": 2, 00:17:48.717 "num_base_bdevs_discovered": 1, 00:17:48.717 "num_base_bdevs_operational": 1, 00:17:48.717 "base_bdevs_list": [ 00:17:48.717 { 00:17:48.717 "name": null, 00:17:48.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.717 "is_configured": false, 00:17:48.717 "data_offset": 0, 00:17:48.717 "data_size": 7936 00:17:48.717 }, 00:17:48.717 { 00:17:48.717 "name": "BaseBdev2", 00:17:48.717 "uuid": "6f1208c3-7423-5a7b-bb77-ae15dcbc6317", 00:17:48.717 "is_configured": true, 00:17:48.717 "data_offset": 256, 00:17:48.717 "data_size": 7936 00:17:48.717 } 00:17:48.717 ] 00:17:48.717 }' 00:17:48.717 20:30:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.717 20:30:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.286 20:30:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:17:49.286 20:30:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:49.286 20:30:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:49.286 20:30:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:49.286 20:30:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:49.286 20:30:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.286 20:30:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.286 20:30:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.286 20:30:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.286 20:30:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.286 20:30:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:49.286 "name": "raid_bdev1", 00:17:49.286 "uuid": "83b99969-9e6f-4240-8eb9-ec66a1895554", 00:17:49.286 "strip_size_kb": 0, 00:17:49.286 "state": "online", 00:17:49.286 "raid_level": "raid1", 00:17:49.286 "superblock": true, 00:17:49.286 "num_base_bdevs": 2, 00:17:49.286 "num_base_bdevs_discovered": 1, 00:17:49.286 "num_base_bdevs_operational": 1, 00:17:49.286 "base_bdevs_list": [ 00:17:49.286 { 00:17:49.286 "name": null, 00:17:49.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.286 "is_configured": false, 00:17:49.286 "data_offset": 0, 00:17:49.286 "data_size": 7936 00:17:49.286 }, 00:17:49.286 { 00:17:49.286 "name": "BaseBdev2", 00:17:49.286 "uuid": "6f1208c3-7423-5a7b-bb77-ae15dcbc6317", 00:17:49.286 "is_configured": 
true, 00:17:49.286 "data_offset": 256, 00:17:49.286 "data_size": 7936 00:17:49.286 } 00:17:49.286 ] 00:17:49.286 }' 00:17:49.286 20:30:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:49.286 20:30:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:49.286 20:30:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:49.286 20:30:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:49.286 20:30:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:49.286 20:30:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:17:49.286 20:30:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:49.286 20:30:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:49.286 20:30:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:49.286 20:30:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:49.286 20:30:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:49.286 20:30:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:49.286 20:30:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.286 20:30:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.286 [2024-11-26 20:30:42.829835] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:49.286 [2024-11-26 20:30:42.830037] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:49.286 [2024-11-26 20:30:42.830053] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:49.286 request: 00:17:49.286 { 00:17:49.286 "base_bdev": "BaseBdev1", 00:17:49.286 "raid_bdev": "raid_bdev1", 00:17:49.286 "method": "bdev_raid_add_base_bdev", 00:17:49.286 "req_id": 1 00:17:49.286 } 00:17:49.286 Got JSON-RPC error response 00:17:49.286 response: 00:17:49.286 { 00:17:49.286 "code": -22, 00:17:49.286 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:49.286 } 00:17:49.286 20:30:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:49.286 20:30:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:17:49.286 20:30:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:49.546 20:30:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:49.546 20:30:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:49.546 20:30:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:50.483 20:30:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:50.483 20:30:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:50.483 20:30:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:50.483 20:30:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:17:50.483 20:30:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:50.483 20:30:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:50.483 20:30:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.483 20:30:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.483 20:30:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.483 20:30:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.483 20:30:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.483 20:30:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.483 20:30:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.483 20:30:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.483 20:30:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.483 20:30:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.483 "name": "raid_bdev1", 00:17:50.483 "uuid": "83b99969-9e6f-4240-8eb9-ec66a1895554", 00:17:50.483 "strip_size_kb": 0, 00:17:50.483 "state": "online", 00:17:50.483 "raid_level": "raid1", 00:17:50.483 "superblock": true, 00:17:50.483 "num_base_bdevs": 2, 00:17:50.483 "num_base_bdevs_discovered": 1, 00:17:50.483 "num_base_bdevs_operational": 1, 00:17:50.483 "base_bdevs_list": [ 00:17:50.483 { 00:17:50.483 "name": null, 00:17:50.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.483 "is_configured": false, 00:17:50.483 
"data_offset": 0, 00:17:50.483 "data_size": 7936 00:17:50.483 }, 00:17:50.483 { 00:17:50.483 "name": "BaseBdev2", 00:17:50.483 "uuid": "6f1208c3-7423-5a7b-bb77-ae15dcbc6317", 00:17:50.483 "is_configured": true, 00:17:50.483 "data_offset": 256, 00:17:50.483 "data_size": 7936 00:17:50.483 } 00:17:50.483 ] 00:17:50.483 }' 00:17:50.483 20:30:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.483 20:30:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.091 20:30:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:51.091 20:30:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:51.091 20:30:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:51.092 20:30:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:51.092 20:30:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:51.092 20:30:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.092 20:30:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.092 20:30:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.092 20:30:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.092 20:30:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.092 20:30:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:51.092 "name": "raid_bdev1", 00:17:51.092 "uuid": "83b99969-9e6f-4240-8eb9-ec66a1895554", 00:17:51.092 
"strip_size_kb": 0, 00:17:51.092 "state": "online", 00:17:51.092 "raid_level": "raid1", 00:17:51.092 "superblock": true, 00:17:51.092 "num_base_bdevs": 2, 00:17:51.092 "num_base_bdevs_discovered": 1, 00:17:51.092 "num_base_bdevs_operational": 1, 00:17:51.092 "base_bdevs_list": [ 00:17:51.092 { 00:17:51.092 "name": null, 00:17:51.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.092 "is_configured": false, 00:17:51.092 "data_offset": 0, 00:17:51.092 "data_size": 7936 00:17:51.092 }, 00:17:51.092 { 00:17:51.092 "name": "BaseBdev2", 00:17:51.092 "uuid": "6f1208c3-7423-5a7b-bb77-ae15dcbc6317", 00:17:51.092 "is_configured": true, 00:17:51.092 "data_offset": 256, 00:17:51.092 "data_size": 7936 00:17:51.092 } 00:17:51.092 ] 00:17:51.092 }' 00:17:51.092 20:30:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:51.092 20:30:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:51.092 20:30:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:51.092 20:30:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:51.092 20:30:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 98720 00:17:51.092 20:30:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 98720 ']' 00:17:51.092 20:30:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 98720 00:17:51.092 20:30:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:17:51.092 20:30:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:51.092 20:30:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98720 00:17:51.092 20:30:44 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:51.092 20:30:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:51.092 20:30:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98720' 00:17:51.092 killing process with pid 98720 00:17:51.092 20:30:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 98720 00:17:51.092 Received shutdown signal, test time was about 60.000000 seconds 00:17:51.092 00:17:51.092 Latency(us) 00:17:51.092 [2024-11-26T20:30:44.644Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:51.092 [2024-11-26T20:30:44.644Z] =================================================================================================================== 00:17:51.092 [2024-11-26T20:30:44.644Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:51.092 20:30:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 98720 00:17:51.092 [2024-11-26 20:30:44.485067] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:51.092 [2024-11-26 20:30:44.485254] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:51.092 [2024-11-26 20:30:44.485355] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:51.092 [2024-11-26 20:30:44.485371] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:17:51.092 [2024-11-26 20:30:44.541369] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:51.352 20:30:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:17:51.352 00:17:51.352 real 0m19.466s 00:17:51.352 user 0m26.025s 00:17:51.352 sys 0m2.618s 00:17:51.352 20:30:44 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:51.352 20:30:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.352 ************************************ 00:17:51.352 END TEST raid_rebuild_test_sb_md_separate 00:17:51.352 ************************************ 00:17:51.610 20:30:44 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:17:51.610 20:30:44 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:17:51.610 20:30:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:51.610 20:30:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:51.610 20:30:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:51.610 ************************************ 00:17:51.610 START TEST raid_state_function_test_sb_md_interleaved 00:17:51.610 ************************************ 00:17:51.610 20:30:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:17:51.610 20:30:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:51.610 20:30:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:51.610 20:30:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:51.610 20:30:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:51.610 20:30:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:51.610 20:30:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:51.610 20:30:44 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:51.610 20:30:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:51.610 20:30:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:51.610 20:30:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:51.610 20:30:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:51.611 20:30:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:51.611 20:30:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:51.611 20:30:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:51.611 20:30:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:51.611 20:30:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:51.611 20:30:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:51.611 20:30:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:51.611 20:30:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:51.611 20:30:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:51.611 20:30:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:51.611 20:30:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:17:51.611 20:30:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=99406 00:17:51.611 20:30:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:51.611 Process raid pid: 99406 00:17:51.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:51.611 20:30:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 99406' 00:17:51.611 20:30:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 99406 00:17:51.611 20:30:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 99406 ']' 00:17:51.611 20:30:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.611 20:30:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:51.611 20:30:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.611 20:30:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:51.611 20:30:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:51.611 [2024-11-26 20:30:45.051505] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:17:51.611 [2024-11-26 20:30:45.051837] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:51.869 [2024-11-26 20:30:45.221699] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.870 [2024-11-26 20:30:45.333523] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.870 [2024-11-26 20:30:45.417024] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:51.870 [2024-11-26 20:30:45.417074] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:52.808 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:52.808 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:17:52.808 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:52.808 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.808 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:52.808 [2024-11-26 20:30:46.080394] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:52.808 [2024-11-26 20:30:46.080554] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:52.808 [2024-11-26 20:30:46.080577] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:52.808 [2024-11-26 20:30:46.080592] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:52.808 20:30:46 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.808 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:52.808 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:52.808 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:52.808 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:52.808 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:52.808 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:52.808 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.808 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.808 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.808 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.808 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.808 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:52.808 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.808 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:52.808 20:30:46 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.808 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.808 "name": "Existed_Raid", 00:17:52.808 "uuid": "701f41b4-b5cc-44bc-8cbc-cfcaf3e29f1b", 00:17:52.808 "strip_size_kb": 0, 00:17:52.808 "state": "configuring", 00:17:52.808 "raid_level": "raid1", 00:17:52.808 "superblock": true, 00:17:52.808 "num_base_bdevs": 2, 00:17:52.808 "num_base_bdevs_discovered": 0, 00:17:52.808 "num_base_bdevs_operational": 2, 00:17:52.808 "base_bdevs_list": [ 00:17:52.808 { 00:17:52.808 "name": "BaseBdev1", 00:17:52.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.808 "is_configured": false, 00:17:52.808 "data_offset": 0, 00:17:52.808 "data_size": 0 00:17:52.808 }, 00:17:52.809 { 00:17:52.809 "name": "BaseBdev2", 00:17:52.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.809 "is_configured": false, 00:17:52.809 "data_offset": 0, 00:17:52.809 "data_size": 0 00:17:52.809 } 00:17:52.809 ] 00:17:52.809 }' 00:17:52.809 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.809 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:53.068 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:53.068 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.068 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:53.068 [2024-11-26 20:30:46.563604] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:53.068 [2024-11-26 20:30:46.563778] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state 
configuring 00:17:53.068 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.068 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:53.068 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.068 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:53.068 [2024-11-26 20:30:46.571670] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:53.068 [2024-11-26 20:30:46.571797] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:53.068 [2024-11-26 20:30:46.571846] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:53.068 [2024-11-26 20:30:46.571898] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:53.068 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.068 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:17:53.068 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.068 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:53.068 [2024-11-26 20:30:46.591005] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:53.068 BaseBdev1 00:17:53.068 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.069 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:53.069 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:53.069 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:53.069 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:17:53.069 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:53.069 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:53.069 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:53.069 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.069 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:53.069 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.069 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:53.069 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.069 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:53.069 [ 00:17:53.069 { 00:17:53.069 "name": "BaseBdev1", 00:17:53.069 "aliases": [ 00:17:53.069 "8c155bb9-21fc-48f7-9ec4-5893e8379496" 00:17:53.069 ], 00:17:53.069 "product_name": "Malloc disk", 00:17:53.069 "block_size": 4128, 00:17:53.069 "num_blocks": 8192, 00:17:53.069 "uuid": "8c155bb9-21fc-48f7-9ec4-5893e8379496", 00:17:53.069 "md_size": 32, 00:17:53.069 
"md_interleave": true, 00:17:53.069 "dif_type": 0, 00:17:53.069 "assigned_rate_limits": { 00:17:53.069 "rw_ios_per_sec": 0, 00:17:53.069 "rw_mbytes_per_sec": 0, 00:17:53.069 "r_mbytes_per_sec": 0, 00:17:53.069 "w_mbytes_per_sec": 0 00:17:53.069 }, 00:17:53.069 "claimed": true, 00:17:53.069 "claim_type": "exclusive_write", 00:17:53.069 "zoned": false, 00:17:53.069 "supported_io_types": { 00:17:53.069 "read": true, 00:17:53.069 "write": true, 00:17:53.069 "unmap": true, 00:17:53.069 "flush": true, 00:17:53.069 "reset": true, 00:17:53.069 "nvme_admin": false, 00:17:53.069 "nvme_io": false, 00:17:53.069 "nvme_io_md": false, 00:17:53.069 "write_zeroes": true, 00:17:53.069 "zcopy": true, 00:17:53.069 "get_zone_info": false, 00:17:53.069 "zone_management": false, 00:17:53.069 "zone_append": false, 00:17:53.069 "compare": false, 00:17:53.069 "compare_and_write": false, 00:17:53.069 "abort": true, 00:17:53.069 "seek_hole": false, 00:17:53.069 "seek_data": false, 00:17:53.069 "copy": true, 00:17:53.069 "nvme_iov_md": false 00:17:53.069 }, 00:17:53.069 "memory_domains": [ 00:17:53.069 { 00:17:53.069 "dma_device_id": "system", 00:17:53.069 "dma_device_type": 1 00:17:53.069 }, 00:17:53.069 { 00:17:53.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:53.069 "dma_device_type": 2 00:17:53.069 } 00:17:53.069 ], 00:17:53.069 "driver_specific": {} 00:17:53.069 } 00:17:53.069 ] 00:17:53.069 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.069 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:17:53.069 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:53.069 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:53.069 20:30:46 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:53.069 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:53.069 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:53.069 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:53.329 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.329 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.329 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.329 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.329 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.329 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:53.329 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.329 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:53.329 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.329 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.329 "name": "Existed_Raid", 00:17:53.329 "uuid": "fb53a396-f09d-49da-8cd1-8ae62590f0c6", 00:17:53.329 "strip_size_kb": 0, 00:17:53.329 "state": "configuring", 00:17:53.329 "raid_level": "raid1", 
00:17:53.329 "superblock": true, 00:17:53.329 "num_base_bdevs": 2, 00:17:53.329 "num_base_bdevs_discovered": 1, 00:17:53.329 "num_base_bdevs_operational": 2, 00:17:53.329 "base_bdevs_list": [ 00:17:53.329 { 00:17:53.329 "name": "BaseBdev1", 00:17:53.329 "uuid": "8c155bb9-21fc-48f7-9ec4-5893e8379496", 00:17:53.329 "is_configured": true, 00:17:53.329 "data_offset": 256, 00:17:53.329 "data_size": 7936 00:17:53.329 }, 00:17:53.329 { 00:17:53.329 "name": "BaseBdev2", 00:17:53.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.329 "is_configured": false, 00:17:53.329 "data_offset": 0, 00:17:53.329 "data_size": 0 00:17:53.329 } 00:17:53.329 ] 00:17:53.329 }' 00:17:53.329 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.329 20:30:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:53.589 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:53.589 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.589 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:53.589 [2024-11-26 20:30:47.078610] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:53.589 [2024-11-26 20:30:47.078779] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:17:53.589 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.589 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:53.589 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:17:53.589 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:53.589 [2024-11-26 20:30:47.086748] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:53.589 [2024-11-26 20:30:47.089184] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:53.589 [2024-11-26 20:30:47.089315] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:53.589 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.589 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:53.589 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:53.589 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:53.589 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:53.589 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:53.589 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:53.589 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:53.589 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:53.589 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.589 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.589 
20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.589 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.589 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.589 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:53.589 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.589 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:53.589 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.848 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.848 "name": "Existed_Raid", 00:17:53.848 "uuid": "4327a14b-4277-48c2-a759-72d3bb16c954", 00:17:53.848 "strip_size_kb": 0, 00:17:53.848 "state": "configuring", 00:17:53.848 "raid_level": "raid1", 00:17:53.848 "superblock": true, 00:17:53.848 "num_base_bdevs": 2, 00:17:53.848 "num_base_bdevs_discovered": 1, 00:17:53.848 "num_base_bdevs_operational": 2, 00:17:53.848 "base_bdevs_list": [ 00:17:53.848 { 00:17:53.848 "name": "BaseBdev1", 00:17:53.848 "uuid": "8c155bb9-21fc-48f7-9ec4-5893e8379496", 00:17:53.848 "is_configured": true, 00:17:53.848 "data_offset": 256, 00:17:53.848 "data_size": 7936 00:17:53.848 }, 00:17:53.849 { 00:17:53.849 "name": "BaseBdev2", 00:17:53.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.849 "is_configured": false, 00:17:53.849 "data_offset": 0, 00:17:53.849 "data_size": 0 00:17:53.849 } 00:17:53.849 ] 00:17:53.849 }' 00:17:53.849 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:17:53.849 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:54.109 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:17:54.109 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.109 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:54.109 BaseBdev2 00:17:54.109 [2024-11-26 20:30:47.590963] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:54.109 [2024-11-26 20:30:47.591193] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:17:54.109 [2024-11-26 20:30:47.591210] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:54.109 [2024-11-26 20:30:47.591330] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:17:54.109 [2024-11-26 20:30:47.591424] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:17:54.109 [2024-11-26 20:30:47.591446] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:17:54.109 [2024-11-26 20:30:47.591520] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:54.109 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.109 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:54.109 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:54.109 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:17:54.109 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:17:54.109 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:54.109 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:54.109 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:54.109 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.109 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:54.109 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.109 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:54.109 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.109 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:54.109 [ 00:17:54.109 { 00:17:54.109 "name": "BaseBdev2", 00:17:54.109 "aliases": [ 00:17:54.109 "914facd6-065d-4322-8f4e-6cae33807cb7" 00:17:54.109 ], 00:17:54.109 "product_name": "Malloc disk", 00:17:54.109 "block_size": 4128, 00:17:54.109 "num_blocks": 8192, 00:17:54.109 "uuid": "914facd6-065d-4322-8f4e-6cae33807cb7", 00:17:54.109 "md_size": 32, 00:17:54.109 "md_interleave": true, 00:17:54.109 "dif_type": 0, 00:17:54.109 "assigned_rate_limits": { 00:17:54.109 "rw_ios_per_sec": 0, 00:17:54.109 "rw_mbytes_per_sec": 0, 00:17:54.109 "r_mbytes_per_sec": 0, 00:17:54.109 "w_mbytes_per_sec": 0 00:17:54.109 }, 00:17:54.109 "claimed": true, 00:17:54.109 "claim_type": "exclusive_write", 
00:17:54.109 "zoned": false, 00:17:54.109 "supported_io_types": { 00:17:54.109 "read": true, 00:17:54.109 "write": true, 00:17:54.109 "unmap": true, 00:17:54.109 "flush": true, 00:17:54.109 "reset": true, 00:17:54.109 "nvme_admin": false, 00:17:54.109 "nvme_io": false, 00:17:54.109 "nvme_io_md": false, 00:17:54.109 "write_zeroes": true, 00:17:54.109 "zcopy": true, 00:17:54.109 "get_zone_info": false, 00:17:54.109 "zone_management": false, 00:17:54.109 "zone_append": false, 00:17:54.109 "compare": false, 00:17:54.109 "compare_and_write": false, 00:17:54.109 "abort": true, 00:17:54.109 "seek_hole": false, 00:17:54.109 "seek_data": false, 00:17:54.109 "copy": true, 00:17:54.109 "nvme_iov_md": false 00:17:54.109 }, 00:17:54.109 "memory_domains": [ 00:17:54.109 { 00:17:54.109 "dma_device_id": "system", 00:17:54.109 "dma_device_type": 1 00:17:54.109 }, 00:17:54.109 { 00:17:54.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:54.109 "dma_device_type": 2 00:17:54.109 } 00:17:54.109 ], 00:17:54.109 "driver_specific": {} 00:17:54.109 } 00:17:54.109 ] 00:17:54.109 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.109 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:17:54.109 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:54.109 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:54.109 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:54.109 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:54.109 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:54.109 
20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:54.109 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:54.109 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:54.109 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:54.109 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:54.109 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:54.109 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.109 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.109 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:54.109 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.109 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:54.109 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.109 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.109 "name": "Existed_Raid", 00:17:54.109 "uuid": "4327a14b-4277-48c2-a759-72d3bb16c954", 00:17:54.109 "strip_size_kb": 0, 00:17:54.109 "state": "online", 00:17:54.109 "raid_level": "raid1", 00:17:54.109 "superblock": true, 00:17:54.109 "num_base_bdevs": 2, 00:17:54.109 "num_base_bdevs_discovered": 2, 00:17:54.109 
"num_base_bdevs_operational": 2, 00:17:54.109 "base_bdevs_list": [ 00:17:54.109 { 00:17:54.109 "name": "BaseBdev1", 00:17:54.109 "uuid": "8c155bb9-21fc-48f7-9ec4-5893e8379496", 00:17:54.109 "is_configured": true, 00:17:54.109 "data_offset": 256, 00:17:54.109 "data_size": 7936 00:17:54.109 }, 00:17:54.109 { 00:17:54.109 "name": "BaseBdev2", 00:17:54.109 "uuid": "914facd6-065d-4322-8f4e-6cae33807cb7", 00:17:54.109 "is_configured": true, 00:17:54.109 "data_offset": 256, 00:17:54.109 "data_size": 7936 00:17:54.109 } 00:17:54.109 ] 00:17:54.109 }' 00:17:54.109 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:54.110 20:30:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:54.739 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:54.739 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:54.739 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:54.739 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:54.739 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:17:54.739 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:54.739 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:54.739 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.739 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:54.739 20:30:48 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:54.739 [2024-11-26 20:30:48.087273] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:54.739 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.739 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:54.739 "name": "Existed_Raid", 00:17:54.739 "aliases": [ 00:17:54.739 "4327a14b-4277-48c2-a759-72d3bb16c954" 00:17:54.739 ], 00:17:54.739 "product_name": "Raid Volume", 00:17:54.739 "block_size": 4128, 00:17:54.739 "num_blocks": 7936, 00:17:54.739 "uuid": "4327a14b-4277-48c2-a759-72d3bb16c954", 00:17:54.739 "md_size": 32, 00:17:54.739 "md_interleave": true, 00:17:54.739 "dif_type": 0, 00:17:54.739 "assigned_rate_limits": { 00:17:54.739 "rw_ios_per_sec": 0, 00:17:54.739 "rw_mbytes_per_sec": 0, 00:17:54.739 "r_mbytes_per_sec": 0, 00:17:54.739 "w_mbytes_per_sec": 0 00:17:54.739 }, 00:17:54.739 "claimed": false, 00:17:54.739 "zoned": false, 00:17:54.739 "supported_io_types": { 00:17:54.739 "read": true, 00:17:54.739 "write": true, 00:17:54.739 "unmap": false, 00:17:54.739 "flush": false, 00:17:54.739 "reset": true, 00:17:54.739 "nvme_admin": false, 00:17:54.739 "nvme_io": false, 00:17:54.739 "nvme_io_md": false, 00:17:54.739 "write_zeroes": true, 00:17:54.739 "zcopy": false, 00:17:54.739 "get_zone_info": false, 00:17:54.739 "zone_management": false, 00:17:54.739 "zone_append": false, 00:17:54.739 "compare": false, 00:17:54.739 "compare_and_write": false, 00:17:54.739 "abort": false, 00:17:54.739 "seek_hole": false, 00:17:54.739 "seek_data": false, 00:17:54.739 "copy": false, 00:17:54.739 "nvme_iov_md": false 00:17:54.739 }, 00:17:54.739 "memory_domains": [ 00:17:54.739 { 00:17:54.739 "dma_device_id": "system", 00:17:54.739 "dma_device_type": 1 00:17:54.739 }, 00:17:54.739 { 00:17:54.739 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:17:54.739 "dma_device_type": 2 00:17:54.739 }, 00:17:54.739 { 00:17:54.739 "dma_device_id": "system", 00:17:54.739 "dma_device_type": 1 00:17:54.739 }, 00:17:54.739 { 00:17:54.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:54.739 "dma_device_type": 2 00:17:54.739 } 00:17:54.739 ], 00:17:54.739 "driver_specific": { 00:17:54.739 "raid": { 00:17:54.739 "uuid": "4327a14b-4277-48c2-a759-72d3bb16c954", 00:17:54.739 "strip_size_kb": 0, 00:17:54.739 "state": "online", 00:17:54.739 "raid_level": "raid1", 00:17:54.739 "superblock": true, 00:17:54.739 "num_base_bdevs": 2, 00:17:54.739 "num_base_bdevs_discovered": 2, 00:17:54.739 "num_base_bdevs_operational": 2, 00:17:54.739 "base_bdevs_list": [ 00:17:54.739 { 00:17:54.739 "name": "BaseBdev1", 00:17:54.739 "uuid": "8c155bb9-21fc-48f7-9ec4-5893e8379496", 00:17:54.739 "is_configured": true, 00:17:54.739 "data_offset": 256, 00:17:54.739 "data_size": 7936 00:17:54.739 }, 00:17:54.739 { 00:17:54.739 "name": "BaseBdev2", 00:17:54.739 "uuid": "914facd6-065d-4322-8f4e-6cae33807cb7", 00:17:54.739 "is_configured": true, 00:17:54.739 "data_offset": 256, 00:17:54.739 "data_size": 7936 00:17:54.739 } 00:17:54.739 ] 00:17:54.739 } 00:17:54.739 } 00:17:54.739 }' 00:17:54.739 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:54.739 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:54.739 BaseBdev2' 00:17:54.739 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:54.739 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:17:54.740 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:17:54.740 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:54.740 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.740 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:54.740 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:54.740 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.740 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:54.740 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:54.740 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:54.740 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:54.740 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:54.740 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.740 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:54.740 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.999 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:54.999 
20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:54.999 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:54.999 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.999 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:54.999 [2024-11-26 20:30:48.318887] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:54.999 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.999 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:54.999 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:54.999 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:54.999 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:17:54.999 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:54.999 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:54.999 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:54.999 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:54.999 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:54.999 20:30:48 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:54.999 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:54.999 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:54.999 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:54.999 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:54.999 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.999 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.000 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.000 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.000 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:55.000 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.000 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:55.000 "name": "Existed_Raid", 00:17:55.000 "uuid": "4327a14b-4277-48c2-a759-72d3bb16c954", 00:17:55.000 "strip_size_kb": 0, 00:17:55.000 "state": "online", 00:17:55.000 "raid_level": "raid1", 00:17:55.000 "superblock": true, 00:17:55.000 "num_base_bdevs": 2, 00:17:55.000 "num_base_bdevs_discovered": 1, 00:17:55.000 "num_base_bdevs_operational": 1, 00:17:55.000 "base_bdevs_list": [ 00:17:55.000 { 00:17:55.000 "name": null, 00:17:55.000 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:55.000 "is_configured": false, 00:17:55.000 "data_offset": 0, 00:17:55.000 "data_size": 7936 00:17:55.000 }, 00:17:55.000 { 00:17:55.000 "name": "BaseBdev2", 00:17:55.000 "uuid": "914facd6-065d-4322-8f4e-6cae33807cb7", 00:17:55.000 "is_configured": true, 00:17:55.000 "data_offset": 256, 00:17:55.000 "data_size": 7936 00:17:55.000 } 00:17:55.000 ] 00:17:55.000 }' 00:17:55.000 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:55.000 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.261 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:55.261 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:55.261 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.261 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:55.261 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.261 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.520 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.520 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:55.520 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:55.520 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:55.520 20:30:48 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.520 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.520 [2024-11-26 20:30:48.857276] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:55.520 [2024-11-26 20:30:48.857423] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:55.520 [2024-11-26 20:30:48.879988] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:55.520 [2024-11-26 20:30:48.880140] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:55.520 [2024-11-26 20:30:48.880163] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:17:55.520 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.520 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:55.520 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:55.520 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.520 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.520 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.520 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:55.520 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.520 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:55.520 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:55.520 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:55.520 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 99406 00:17:55.520 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 99406 ']' 00:17:55.520 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 99406 00:17:55.520 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:17:55.520 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:55.520 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99406 00:17:55.520 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:55.520 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:55.520 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99406' 00:17:55.520 killing process with pid 99406 00:17:55.520 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 99406 00:17:55.520 20:30:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 99406 00:17:55.520 [2024-11-26 20:30:48.972018] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:55.520 [2024-11-26 20:30:48.973964] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:55.783 
20:30:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:17:55.783 00:17:55.783 real 0m4.392s 00:17:55.783 user 0m6.867s 00:17:55.783 sys 0m0.827s 00:17:55.783 20:30:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:55.783 20:30:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.783 ************************************ 00:17:55.783 END TEST raid_state_function_test_sb_md_interleaved 00:17:55.783 ************************************ 00:17:56.044 20:30:49 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:17:56.044 20:30:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:17:56.044 20:30:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:56.044 20:30:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:56.044 ************************************ 00:17:56.044 START TEST raid_superblock_test_md_interleaved 00:17:56.044 ************************************ 00:17:56.044 20:30:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:17:56.044 20:30:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:56.044 20:30:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:56.044 20:30:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:56.044 20:30:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:56.044 20:30:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:56.044 20:30:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:17:56.044 20:30:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:56.044 20:30:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:56.044 20:30:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:56.044 20:30:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:56.044 20:30:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:56.044 20:30:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:56.044 20:30:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:56.044 20:30:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:56.044 20:30:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:56.044 20:30:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=99653 00:17:56.044 20:30:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:56.044 20:30:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 99653 00:17:56.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:56.044 20:30:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 99653 ']' 00:17:56.044 20:30:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:56.044 20:30:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:56.044 20:30:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:56.044 20:30:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:56.044 20:30:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.044 [2024-11-26 20:30:49.483172] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:56.044 [2024-11-26 20:30:49.483566] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99653 ] 00:17:56.304 [2024-11-26 20:30:49.650710] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.304 [2024-11-26 20:30:49.757469] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.304 [2024-11-26 20:30:49.834819] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:56.304 [2024-11-26 20:30:49.834871] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:57.243 20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:57.243 20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:17:57.243 20:30:50 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:57.243 20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:57.243 20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:57.243 20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:57.243 20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:57.243 20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:57.243 20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:57.243 20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:57.243 20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:17:57.243 20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.243 20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:57.243 malloc1 00:17:57.243 20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.244 20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:57.244 20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.244 20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:57.244 [2024-11-26 20:30:50.522310] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match 
on malloc1 00:17:57.244 [2024-11-26 20:30:50.522482] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:57.244 [2024-11-26 20:30:50.522521] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:57.244 [2024-11-26 20:30:50.522544] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:57.244 [2024-11-26 20:30:50.525042] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:57.244 [2024-11-26 20:30:50.525098] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:57.244 pt1 00:17:57.244 20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.244 20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:57.244 20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:57.244 20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:57.244 20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:57.244 20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:57.244 20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:57.244 20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:57.244 20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:57.244 20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:17:57.244 20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.244 20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:57.244 malloc2 00:17:57.244 20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.244 20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:57.244 20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.244 20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:57.244 [2024-11-26 20:30:50.562069] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:57.244 [2024-11-26 20:30:50.562249] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:57.244 [2024-11-26 20:30:50.562328] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:57.244 [2024-11-26 20:30:50.562375] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:57.244 [2024-11-26 20:30:50.564840] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:57.244 [2024-11-26 20:30:50.564953] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:57.244 pt2 00:17:57.244 20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.244 20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:57.244 20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:57.244 20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:57.244 
20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.244 20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:57.244 [2024-11-26 20:30:50.570106] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:57.244 [2024-11-26 20:30:50.572514] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:57.244 [2024-11-26 20:30:50.572840] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:17:57.244 [2024-11-26 20:30:50.572906] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:57.244 [2024-11-26 20:30:50.573107] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:17:57.244 [2024-11-26 20:30:50.573251] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:17:57.244 [2024-11-26 20:30:50.573307] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:17:57.244 [2024-11-26 20:30:50.573485] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:57.244 20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.244 20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:57.244 20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:57.244 20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:57.244 20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:57.244 20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:17:57.244 20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:57.244 20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.244 20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.244 20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.244 20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.244 20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.244 20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.244 20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.244 20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:57.244 20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.244 20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.244 "name": "raid_bdev1", 00:17:57.244 "uuid": "2fb51efb-4f14-4d8a-a1b8-6191b46f77a9", 00:17:57.244 "strip_size_kb": 0, 00:17:57.244 "state": "online", 00:17:57.244 "raid_level": "raid1", 00:17:57.244 "superblock": true, 00:17:57.244 "num_base_bdevs": 2, 00:17:57.244 "num_base_bdevs_discovered": 2, 00:17:57.244 "num_base_bdevs_operational": 2, 00:17:57.244 "base_bdevs_list": [ 00:17:57.244 { 00:17:57.244 "name": "pt1", 00:17:57.244 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:57.244 "is_configured": true, 00:17:57.244 "data_offset": 256, 00:17:57.244 "data_size": 7936 00:17:57.244 }, 00:17:57.244 { 00:17:57.244 "name": 
"pt2", 00:17:57.244 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:57.244 "is_configured": true, 00:17:57.244 "data_offset": 256, 00:17:57.244 "data_size": 7936 00:17:57.244 } 00:17:57.244 ] 00:17:57.244 }' 00:17:57.244 20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.244 20:30:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:57.504 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:57.504 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:57.504 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:57.504 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:57.504 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:17:57.504 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:57.504 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:57.504 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:57.504 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.504 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:57.504 [2024-11-26 20:30:51.041863] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:57.504 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.764 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:17:57.764 "name": "raid_bdev1", 00:17:57.764 "aliases": [ 00:17:57.764 "2fb51efb-4f14-4d8a-a1b8-6191b46f77a9" 00:17:57.764 ], 00:17:57.764 "product_name": "Raid Volume", 00:17:57.764 "block_size": 4128, 00:17:57.764 "num_blocks": 7936, 00:17:57.764 "uuid": "2fb51efb-4f14-4d8a-a1b8-6191b46f77a9", 00:17:57.764 "md_size": 32, 00:17:57.764 "md_interleave": true, 00:17:57.764 "dif_type": 0, 00:17:57.764 "assigned_rate_limits": { 00:17:57.764 "rw_ios_per_sec": 0, 00:17:57.764 "rw_mbytes_per_sec": 0, 00:17:57.764 "r_mbytes_per_sec": 0, 00:17:57.764 "w_mbytes_per_sec": 0 00:17:57.764 }, 00:17:57.764 "claimed": false, 00:17:57.764 "zoned": false, 00:17:57.764 "supported_io_types": { 00:17:57.764 "read": true, 00:17:57.764 "write": true, 00:17:57.764 "unmap": false, 00:17:57.764 "flush": false, 00:17:57.764 "reset": true, 00:17:57.764 "nvme_admin": false, 00:17:57.764 "nvme_io": false, 00:17:57.764 "nvme_io_md": false, 00:17:57.764 "write_zeroes": true, 00:17:57.764 "zcopy": false, 00:17:57.764 "get_zone_info": false, 00:17:57.764 "zone_management": false, 00:17:57.764 "zone_append": false, 00:17:57.764 "compare": false, 00:17:57.764 "compare_and_write": false, 00:17:57.764 "abort": false, 00:17:57.764 "seek_hole": false, 00:17:57.764 "seek_data": false, 00:17:57.764 "copy": false, 00:17:57.764 "nvme_iov_md": false 00:17:57.764 }, 00:17:57.764 "memory_domains": [ 00:17:57.764 { 00:17:57.764 "dma_device_id": "system", 00:17:57.764 "dma_device_type": 1 00:17:57.764 }, 00:17:57.764 { 00:17:57.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.764 "dma_device_type": 2 00:17:57.764 }, 00:17:57.764 { 00:17:57.764 "dma_device_id": "system", 00:17:57.764 "dma_device_type": 1 00:17:57.764 }, 00:17:57.764 { 00:17:57.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.764 "dma_device_type": 2 00:17:57.764 } 00:17:57.764 ], 00:17:57.764 "driver_specific": { 00:17:57.764 "raid": { 00:17:57.764 "uuid": "2fb51efb-4f14-4d8a-a1b8-6191b46f77a9", 00:17:57.764 
"strip_size_kb": 0, 00:17:57.764 "state": "online", 00:17:57.764 "raid_level": "raid1", 00:17:57.764 "superblock": true, 00:17:57.764 "num_base_bdevs": 2, 00:17:57.764 "num_base_bdevs_discovered": 2, 00:17:57.764 "num_base_bdevs_operational": 2, 00:17:57.764 "base_bdevs_list": [ 00:17:57.764 { 00:17:57.764 "name": "pt1", 00:17:57.764 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:57.764 "is_configured": true, 00:17:57.764 "data_offset": 256, 00:17:57.764 "data_size": 7936 00:17:57.764 }, 00:17:57.764 { 00:17:57.764 "name": "pt2", 00:17:57.764 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:57.764 "is_configured": true, 00:17:57.764 "data_offset": 256, 00:17:57.764 "data_size": 7936 00:17:57.764 } 00:17:57.764 ] 00:17:57.764 } 00:17:57.764 } 00:17:57.764 }' 00:17:57.764 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:57.764 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:57.764 pt2' 00:17:57.764 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:57.764 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:17:57.764 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:57.764 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:57.764 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:57.764 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.764 20:30:51 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:57.764 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.764 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:57.764 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:57.764 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:57.764 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:57.764 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:57.764 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.764 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:57.764 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.764 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:57.764 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:57.764 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:57.764 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:57.764 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.764 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:17:57.764 [2024-11-26 20:30:51.277533] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:57.764 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.025 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2fb51efb-4f14-4d8a-a1b8-6191b46f77a9 00:17:58.025 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 2fb51efb-4f14-4d8a-a1b8-6191b46f77a9 ']' 00:17:58.025 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:58.025 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.025 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.025 [2024-11-26 20:30:51.321235] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:58.025 [2024-11-26 20:30:51.321363] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:58.025 [2024-11-26 20:30:51.321513] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:58.025 [2024-11-26 20:30:51.321686] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:58.025 [2024-11-26 20:30:51.321755] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:17:58.025 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.025 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.025 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:58.026 20:30:51 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.026 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.026 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.026 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:58.026 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:58.026 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:58.026 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:58.026 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.026 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.026 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.026 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:58.026 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:58.026 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.026 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.026 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.026 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:58.026 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r 
'[.[] | select(.product_name == "passthru")] | any' 00:17:58.026 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.026 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.026 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.026 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:58.026 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:58.026 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:17:58.026 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:58.026 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:58.026 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:58.026 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:58.026 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:58.026 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:58.026 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.026 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.026 [2024-11-26 20:30:51.449283] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:58.026 [2024-11-26 20:30:51.451748] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:58.026 [2024-11-26 20:30:51.451863] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:58.026 [2024-11-26 20:30:51.451944] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:58.026 [2024-11-26 20:30:51.451968] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:58.026 [2024-11-26 20:30:51.451980] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:17:58.026 request: 00:17:58.026 { 00:17:58.026 "name": "raid_bdev1", 00:17:58.026 "raid_level": "raid1", 00:17:58.026 "base_bdevs": [ 00:17:58.026 "malloc1", 00:17:58.026 "malloc2" 00:17:58.026 ], 00:17:58.026 "superblock": false, 00:17:58.026 "method": "bdev_raid_create", 00:17:58.026 "req_id": 1 00:17:58.026 } 00:17:58.026 Got JSON-RPC error response 00:17:58.026 response: 00:17:58.026 { 00:17:58.026 "code": -17, 00:17:58.026 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:58.026 } 00:17:58.026 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:58.026 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:17:58.026 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:58.026 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:58.026 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:58.026 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.026 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.026 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:58.026 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.026 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.026 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:58.026 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:58.026 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:58.026 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.026 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.026 [2024-11-26 20:30:51.509202] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:58.026 [2024-11-26 20:30:51.509290] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.026 [2024-11-26 20:30:51.509315] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:58.026 [2024-11-26 20:30:51.509325] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.026 [2024-11-26 20:30:51.511685] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.026 [2024-11-26 20:30:51.511739] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:58.026 [2024-11-26 20:30:51.511815] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt1 00:17:58.026 [2024-11-26 20:30:51.511880] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:58.026 pt1 00:17:58.026 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.026 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:58.026 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:58.026 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:58.026 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:58.026 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:58.026 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:58.026 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.026 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.027 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.027 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.027 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.027 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.027 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.027 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:17:58.027 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.027 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.027 "name": "raid_bdev1", 00:17:58.027 "uuid": "2fb51efb-4f14-4d8a-a1b8-6191b46f77a9", 00:17:58.027 "strip_size_kb": 0, 00:17:58.027 "state": "configuring", 00:17:58.027 "raid_level": "raid1", 00:17:58.027 "superblock": true, 00:17:58.027 "num_base_bdevs": 2, 00:17:58.027 "num_base_bdevs_discovered": 1, 00:17:58.027 "num_base_bdevs_operational": 2, 00:17:58.027 "base_bdevs_list": [ 00:17:58.027 { 00:17:58.027 "name": "pt1", 00:17:58.027 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:58.027 "is_configured": true, 00:17:58.027 "data_offset": 256, 00:17:58.027 "data_size": 7936 00:17:58.027 }, 00:17:58.027 { 00:17:58.027 "name": null, 00:17:58.027 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:58.027 "is_configured": false, 00:17:58.027 "data_offset": 256, 00:17:58.027 "data_size": 7936 00:17:58.027 } 00:17:58.027 ] 00:17:58.027 }' 00:17:58.027 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.027 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.596 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:58.596 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:58.596 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:58.596 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:58.596 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:17:58.596 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.596 [2024-11-26 20:30:51.977280] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:58.596 [2024-11-26 20:30:51.977411] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.596 [2024-11-26 20:30:51.977455] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:58.596 [2024-11-26 20:30:51.977472] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.596 [2024-11-26 20:30:51.977746] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.596 [2024-11-26 20:30:51.977774] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:58.596 [2024-11-26 20:30:51.977869] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:58.596 [2024-11-26 20:30:51.977916] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:58.596 [2024-11-26 20:30:51.978053] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:17:58.596 [2024-11-26 20:30:51.978069] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:58.596 [2024-11-26 20:30:51.978209] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:17:58.596 [2024-11-26 20:30:51.978309] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:17:58.596 [2024-11-26 20:30:51.978341] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:17:58.596 [2024-11-26 20:30:51.978437] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:58.596 pt2 00:17:58.596 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:17:58.596 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:58.596 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:58.596 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:58.596 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:58.596 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:58.596 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:58.596 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:58.596 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:58.596 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.596 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.596 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.596 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.596 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.596 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.596 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.596 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set 
+x 00:17:58.596 20:30:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.596 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.596 "name": "raid_bdev1", 00:17:58.596 "uuid": "2fb51efb-4f14-4d8a-a1b8-6191b46f77a9", 00:17:58.596 "strip_size_kb": 0, 00:17:58.596 "state": "online", 00:17:58.596 "raid_level": "raid1", 00:17:58.596 "superblock": true, 00:17:58.596 "num_base_bdevs": 2, 00:17:58.596 "num_base_bdevs_discovered": 2, 00:17:58.596 "num_base_bdevs_operational": 2, 00:17:58.596 "base_bdevs_list": [ 00:17:58.596 { 00:17:58.596 "name": "pt1", 00:17:58.596 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:58.596 "is_configured": true, 00:17:58.596 "data_offset": 256, 00:17:58.596 "data_size": 7936 00:17:58.596 }, 00:17:58.596 { 00:17:58.596 "name": "pt2", 00:17:58.597 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:58.597 "is_configured": true, 00:17:58.597 "data_offset": 256, 00:17:58.597 "data_size": 7936 00:17:58.597 } 00:17:58.597 ] 00:17:58.597 }' 00:17:58.597 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.597 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.167 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:59.167 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:59.167 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:59.167 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:59.167 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:17:59.167 20:30:52 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:59.167 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:59.167 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:59.167 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.167 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.167 [2024-11-26 20:30:52.445547] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:59.167 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.167 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:59.167 "name": "raid_bdev1", 00:17:59.167 "aliases": [ 00:17:59.167 "2fb51efb-4f14-4d8a-a1b8-6191b46f77a9" 00:17:59.167 ], 00:17:59.167 "product_name": "Raid Volume", 00:17:59.167 "block_size": 4128, 00:17:59.167 "num_blocks": 7936, 00:17:59.167 "uuid": "2fb51efb-4f14-4d8a-a1b8-6191b46f77a9", 00:17:59.167 "md_size": 32, 00:17:59.167 "md_interleave": true, 00:17:59.167 "dif_type": 0, 00:17:59.167 "assigned_rate_limits": { 00:17:59.167 "rw_ios_per_sec": 0, 00:17:59.167 "rw_mbytes_per_sec": 0, 00:17:59.167 "r_mbytes_per_sec": 0, 00:17:59.167 "w_mbytes_per_sec": 0 00:17:59.167 }, 00:17:59.167 "claimed": false, 00:17:59.167 "zoned": false, 00:17:59.167 "supported_io_types": { 00:17:59.167 "read": true, 00:17:59.167 "write": true, 00:17:59.167 "unmap": false, 00:17:59.167 "flush": false, 00:17:59.167 "reset": true, 00:17:59.167 "nvme_admin": false, 00:17:59.167 "nvme_io": false, 00:17:59.167 "nvme_io_md": false, 00:17:59.167 "write_zeroes": true, 00:17:59.167 "zcopy": false, 00:17:59.167 "get_zone_info": false, 00:17:59.167 "zone_management": 
false, 00:17:59.167 "zone_append": false, 00:17:59.167 "compare": false, 00:17:59.167 "compare_and_write": false, 00:17:59.167 "abort": false, 00:17:59.167 "seek_hole": false, 00:17:59.167 "seek_data": false, 00:17:59.167 "copy": false, 00:17:59.167 "nvme_iov_md": false 00:17:59.167 }, 00:17:59.167 "memory_domains": [ 00:17:59.167 { 00:17:59.167 "dma_device_id": "system", 00:17:59.167 "dma_device_type": 1 00:17:59.167 }, 00:17:59.167 { 00:17:59.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.168 "dma_device_type": 2 00:17:59.168 }, 00:17:59.168 { 00:17:59.168 "dma_device_id": "system", 00:17:59.168 "dma_device_type": 1 00:17:59.168 }, 00:17:59.168 { 00:17:59.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.168 "dma_device_type": 2 00:17:59.168 } 00:17:59.168 ], 00:17:59.168 "driver_specific": { 00:17:59.168 "raid": { 00:17:59.168 "uuid": "2fb51efb-4f14-4d8a-a1b8-6191b46f77a9", 00:17:59.168 "strip_size_kb": 0, 00:17:59.168 "state": "online", 00:17:59.168 "raid_level": "raid1", 00:17:59.168 "superblock": true, 00:17:59.168 "num_base_bdevs": 2, 00:17:59.168 "num_base_bdevs_discovered": 2, 00:17:59.168 "num_base_bdevs_operational": 2, 00:17:59.168 "base_bdevs_list": [ 00:17:59.168 { 00:17:59.168 "name": "pt1", 00:17:59.168 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:59.168 "is_configured": true, 00:17:59.168 "data_offset": 256, 00:17:59.168 "data_size": 7936 00:17:59.168 }, 00:17:59.168 { 00:17:59.168 "name": "pt2", 00:17:59.168 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:59.168 "is_configured": true, 00:17:59.168 "data_offset": 256, 00:17:59.168 "data_size": 7936 00:17:59.168 } 00:17:59.168 ] 00:17:59.168 } 00:17:59.168 } 00:17:59.168 }' 00:17:59.168 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:59.168 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 
00:17:59.168 pt2' 00:17:59.168 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:59.168 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:17:59.168 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:59.168 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:59.168 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:59.168 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.168 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.168 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.168 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:59.168 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:59.168 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:59.168 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:59.168 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:59.168 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.168 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:17:59.168 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.168 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:59.168 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:59.168 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:59.168 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.168 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.168 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:59.168 [2024-11-26 20:30:52.633535] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:59.168 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.168 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 2fb51efb-4f14-4d8a-a1b8-6191b46f77a9 '!=' 2fb51efb-4f14-4d8a-a1b8-6191b46f77a9 ']' 00:17:59.168 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:59.168 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:59.168 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:17:59.168 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:59.168 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.168 20:30:52 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.168 [2024-11-26 20:30:52.681267] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:59.168 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.168 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:59.168 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:59.168 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:59.168 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:59.168 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:59.168 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:59.168 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:59.168 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:59.168 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:59.168 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:59.168 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.168 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.168 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.168 20:30:52 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.168 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.427 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:59.427 "name": "raid_bdev1", 00:17:59.427 "uuid": "2fb51efb-4f14-4d8a-a1b8-6191b46f77a9", 00:17:59.427 "strip_size_kb": 0, 00:17:59.427 "state": "online", 00:17:59.427 "raid_level": "raid1", 00:17:59.427 "superblock": true, 00:17:59.427 "num_base_bdevs": 2, 00:17:59.427 "num_base_bdevs_discovered": 1, 00:17:59.427 "num_base_bdevs_operational": 1, 00:17:59.427 "base_bdevs_list": [ 00:17:59.427 { 00:17:59.427 "name": null, 00:17:59.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.427 "is_configured": false, 00:17:59.427 "data_offset": 0, 00:17:59.427 "data_size": 7936 00:17:59.427 }, 00:17:59.427 { 00:17:59.427 "name": "pt2", 00:17:59.427 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:59.427 "is_configured": true, 00:17:59.427 "data_offset": 256, 00:17:59.427 "data_size": 7936 00:17:59.427 } 00:17:59.427 ] 00:17:59.427 }' 00:17:59.427 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:59.427 20:30:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.687 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:59.687 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.687 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.687 [2024-11-26 20:30:53.161175] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:59.687 [2024-11-26 20:30:53.161294] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state 
changing from online to offline 00:17:59.687 [2024-11-26 20:30:53.161431] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:59.687 [2024-11-26 20:30:53.161526] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:59.687 [2024-11-26 20:30:53.161579] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:17:59.687 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.687 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:59.687 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.687 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.687 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.687 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.687 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:59.687 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:59.687 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:59.687 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:59.687 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:59.687 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.687 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.687 
20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.687 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:59.687 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:59.687 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:59.687 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:59.687 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:17:59.687 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:59.687 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.687 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.687 [2024-11-26 20:30:53.217203] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:59.687 [2024-11-26 20:30:53.217291] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:59.687 [2024-11-26 20:30:53.217316] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:59.687 [2024-11-26 20:30:53.217328] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:59.687 [2024-11-26 20:30:53.219804] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:59.687 [2024-11-26 20:30:53.219867] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:59.687 [2024-11-26 20:30:53.219953] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:59.687 [2024-11-26 20:30:53.219999] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:59.687 [2024-11-26 20:30:53.220077] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:17:59.687 [2024-11-26 20:30:53.220086] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:59.687 [2024-11-26 20:30:53.220210] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:59.687 [2024-11-26 20:30:53.220291] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:17:59.687 [2024-11-26 20:30:53.220304] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:17:59.687 [2024-11-26 20:30:53.220378] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:59.687 pt2 00:17:59.687 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.687 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:59.687 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:59.687 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:59.687 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:59.687 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:59.687 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:59.687 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:59.687 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:17:59.687 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:59.687 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:59.687 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.687 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.687 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.687 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.687 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.947 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:59.947 "name": "raid_bdev1", 00:17:59.947 "uuid": "2fb51efb-4f14-4d8a-a1b8-6191b46f77a9", 00:17:59.947 "strip_size_kb": 0, 00:17:59.947 "state": "online", 00:17:59.947 "raid_level": "raid1", 00:17:59.947 "superblock": true, 00:17:59.947 "num_base_bdevs": 2, 00:17:59.947 "num_base_bdevs_discovered": 1, 00:17:59.947 "num_base_bdevs_operational": 1, 00:17:59.947 "base_bdevs_list": [ 00:17:59.947 { 00:17:59.947 "name": null, 00:17:59.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.947 "is_configured": false, 00:17:59.947 "data_offset": 256, 00:17:59.947 "data_size": 7936 00:17:59.947 }, 00:17:59.947 { 00:17:59.947 "name": "pt2", 00:17:59.947 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:59.947 "is_configured": true, 00:17:59.947 "data_offset": 256, 00:17:59.947 "data_size": 7936 00:17:59.947 } 00:17:59.947 ] 00:17:59.947 }' 00:17:59.947 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:59.947 20:30:53 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.207 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:00.207 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.207 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.207 [2024-11-26 20:30:53.689198] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:00.207 [2024-11-26 20:30:53.689237] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:00.207 [2024-11-26 20:30:53.689336] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:00.207 [2024-11-26 20:30:53.689395] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:00.207 [2024-11-26 20:30:53.689409] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:18:00.207 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.207 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.207 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.207 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.207 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:00.207 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.207 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:00.207 20:30:53 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:00.207 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:00.207 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:00.207 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.207 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.207 [2024-11-26 20:30:53.749207] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:00.207 [2024-11-26 20:30:53.749301] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:00.207 [2024-11-26 20:30:53.749327] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:00.207 [2024-11-26 20:30:53.749347] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:00.207 [2024-11-26 20:30:53.751858] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:00.207 [2024-11-26 20:30:53.751926] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:00.207 [2024-11-26 20:30:53.752005] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:00.207 [2024-11-26 20:30:53.752063] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:00.207 [2024-11-26 20:30:53.752169] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:00.207 [2024-11-26 20:30:53.752189] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:00.207 [2024-11-26 20:30:53.752217] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007080 name raid_bdev1, state configuring 00:18:00.207 [2024-11-26 20:30:53.752277] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:00.207 [2024-11-26 20:30:53.752356] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:18:00.207 [2024-11-26 20:30:53.752372] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:00.207 [2024-11-26 20:30:53.752466] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:00.207 [2024-11-26 20:30:53.752539] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:18:00.207 [2024-11-26 20:30:53.752551] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:18:00.207 [2024-11-26 20:30:53.752662] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:00.207 pt1 00:18:00.207 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.207 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:00.207 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:00.207 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:00.207 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:00.207 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:00.207 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:00.207 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:00.208 20:30:53 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.208 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.208 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.208 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.467 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.467 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.467 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.467 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.467 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.467 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.467 "name": "raid_bdev1", 00:18:00.467 "uuid": "2fb51efb-4f14-4d8a-a1b8-6191b46f77a9", 00:18:00.467 "strip_size_kb": 0, 00:18:00.467 "state": "online", 00:18:00.467 "raid_level": "raid1", 00:18:00.467 "superblock": true, 00:18:00.467 "num_base_bdevs": 2, 00:18:00.467 "num_base_bdevs_discovered": 1, 00:18:00.467 "num_base_bdevs_operational": 1, 00:18:00.467 "base_bdevs_list": [ 00:18:00.467 { 00:18:00.467 "name": null, 00:18:00.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.467 "is_configured": false, 00:18:00.467 "data_offset": 256, 00:18:00.467 "data_size": 7936 00:18:00.467 }, 00:18:00.467 { 00:18:00.467 "name": "pt2", 00:18:00.467 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:00.467 "is_configured": true, 00:18:00.467 "data_offset": 256, 00:18:00.467 
"data_size": 7936 00:18:00.467 } 00:18:00.467 ] 00:18:00.467 }' 00:18:00.467 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.467 20:30:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.727 20:30:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:00.727 20:30:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.727 20:30:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.727 20:30:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:00.727 20:30:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.988 20:30:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:00.988 20:30:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:00.988 20:30:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:00.988 20:30:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.988 20:30:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.988 [2024-11-26 20:30:54.321514] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:00.988 20:30:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.988 20:30:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 2fb51efb-4f14-4d8a-a1b8-6191b46f77a9 '!=' 2fb51efb-4f14-4d8a-a1b8-6191b46f77a9 ']' 00:18:00.988 20:30:54 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 99653 00:18:00.988 20:30:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 99653 ']' 00:18:00.988 20:30:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 99653 00:18:00.988 20:30:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:18:00.988 20:30:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:00.988 20:30:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99653 00:18:00.988 killing process with pid 99653 00:18:00.988 20:30:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:00.988 20:30:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:00.988 20:30:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99653' 00:18:00.988 20:30:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@969 -- # kill 99653 00:18:00.988 [2024-11-26 20:30:54.395762] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:00.988 [2024-11-26 20:30:54.395880] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:00.988 20:30:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@974 -- # wait 99653 00:18:00.988 [2024-11-26 20:30:54.395946] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:00.988 [2024-11-26 20:30:54.395958] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:18:00.988 [2024-11-26 20:30:54.435094] 
bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:01.248 20:30:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:18:01.248 00:18:01.248 real 0m5.424s 00:18:01.248 user 0m8.817s 00:18:01.248 sys 0m0.991s 00:18:01.248 20:30:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:01.248 20:30:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:01.508 ************************************ 00:18:01.508 END TEST raid_superblock_test_md_interleaved 00:18:01.508 ************************************ 00:18:01.508 20:30:54 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:18:01.508 20:30:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:18:01.508 20:30:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:01.508 20:30:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:01.508 ************************************ 00:18:01.508 START TEST raid_rebuild_test_sb_md_interleaved 00:18:01.508 ************************************ 00:18:01.508 20:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false false 00:18:01.508 20:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:01.508 20:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:01.508 20:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:01.508 20:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:01.508 20:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:18:01.508 20:30:54 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:01.508 20:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:01.508 20:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:01.508 20:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:01.508 20:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:01.508 20:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:01.508 20:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:01.508 20:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:01.508 20:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:01.508 20:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:01.508 20:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:01.508 20:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:01.508 20:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:01.508 20:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:01.508 20:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:01.509 20:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:01.509 20:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:01.509 
20:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:01.509 20:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:01.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:01.509 20:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=99965 00:18:01.509 20:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:01.509 20:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 99965 00:18:01.509 20:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 99965 ']' 00:18:01.509 20:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.509 20:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:01.509 20:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.509 20:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:01.509 20:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:01.509 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:01.509 Zero copy mechanism will not be used. 00:18:01.509 [2024-11-26 20:30:54.926827] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:18:01.509 [2024-11-26 20:30:54.926979] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99965 ] 00:18:01.768 [2024-11-26 20:30:55.086849] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.768 [2024-11-26 20:30:55.191379] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.768 [2024-11-26 20:30:55.278441] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:01.768 [2024-11-26 20:30:55.278595] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:02.707 20:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:02.707 20:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:18:02.707 20:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:02.707 20:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:18:02.707 20:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.707 20:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.707 BaseBdev1_malloc 00:18:02.707 20:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.707 20:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:02.707 20:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.707 20:30:55 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.707 [2024-11-26 20:30:56.003299] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:02.707 [2024-11-26 20:30:56.003406] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:02.707 [2024-11-26 20:30:56.003450] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:02.707 [2024-11-26 20:30:56.003463] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:02.707 [2024-11-26 20:30:56.006044] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:02.707 [2024-11-26 20:30:56.006110] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:02.707 BaseBdev1 00:18:02.707 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.707 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:02.707 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:18:02.707 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.707 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.707 BaseBdev2_malloc 00:18:02.707 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.707 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:02.707 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.707 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:18:02.707 [2024-11-26 20:30:56.036473] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:02.707 [2024-11-26 20:30:56.036565] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:02.707 [2024-11-26 20:30:56.036600] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:02.707 [2024-11-26 20:30:56.036634] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:02.707 [2024-11-26 20:30:56.039690] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:02.707 [2024-11-26 20:30:56.039760] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:02.707 BaseBdev2 00:18:02.708 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.708 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:18:02.708 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.708 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.708 spare_malloc 00:18:02.708 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.708 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:02.708 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.708 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.708 spare_delay 00:18:02.708 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.708 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:02.708 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.708 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.708 [2024-11-26 20:30:56.070546] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:02.708 [2024-11-26 20:30:56.070658] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:02.708 [2024-11-26 20:30:56.070705] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:02.708 [2024-11-26 20:30:56.070720] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:02.708 [2024-11-26 20:30:56.073207] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:02.708 [2024-11-26 20:30:56.073266] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:02.708 spare 00:18:02.708 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.708 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:02.708 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.708 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.708 [2024-11-26 20:30:56.078568] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:02.708 [2024-11-26 20:30:56.080999] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:02.708 [2024-11-26 
20:30:56.081253] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:18:02.708 [2024-11-26 20:30:56.081270] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:02.708 [2024-11-26 20:30:56.081411] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:18:02.708 [2024-11-26 20:30:56.081488] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:18:02.708 [2024-11-26 20:30:56.081507] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:18:02.708 [2024-11-26 20:30:56.081635] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:02.708 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.708 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:02.708 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:02.708 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:02.708 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:02.708 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:02.708 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:02.708 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.708 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.708 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:02.708 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.708 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.708 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.708 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.708 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.708 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.708 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.708 "name": "raid_bdev1", 00:18:02.708 "uuid": "c3e717f4-0772-4b33-a927-eace265d22f0", 00:18:02.708 "strip_size_kb": 0, 00:18:02.708 "state": "online", 00:18:02.708 "raid_level": "raid1", 00:18:02.708 "superblock": true, 00:18:02.708 "num_base_bdevs": 2, 00:18:02.708 "num_base_bdevs_discovered": 2, 00:18:02.708 "num_base_bdevs_operational": 2, 00:18:02.708 "base_bdevs_list": [ 00:18:02.708 { 00:18:02.708 "name": "BaseBdev1", 00:18:02.708 "uuid": "e62b975a-2de7-5544-b971-f296bea1a8d9", 00:18:02.708 "is_configured": true, 00:18:02.708 "data_offset": 256, 00:18:02.708 "data_size": 7936 00:18:02.708 }, 00:18:02.708 { 00:18:02.708 "name": "BaseBdev2", 00:18:02.708 "uuid": "49c561ff-296f-554d-a541-496d0bd4b6f6", 00:18:02.708 "is_configured": true, 00:18:02.708 "data_offset": 256, 00:18:02.708 "data_size": 7936 00:18:02.708 } 00:18:02.708 ] 00:18:02.708 }' 00:18:02.708 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.708 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.276 20:30:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:03.276 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:03.276 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.276 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.276 [2024-11-26 20:30:56.538163] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:03.276 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.276 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:03.276 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:03.276 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.276 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.276 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.276 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.276 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:03.276 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:03.276 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:18:03.276 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:03.277 20:30:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.277 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.277 [2024-11-26 20:30:56.633810] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:03.277 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.277 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:03.277 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:03.277 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:03.277 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:03.277 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:03.277 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:03.277 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.277 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.277 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.277 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.277 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.277 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.277 20:30:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.277 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.277 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.277 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.277 "name": "raid_bdev1", 00:18:03.277 "uuid": "c3e717f4-0772-4b33-a927-eace265d22f0", 00:18:03.277 "strip_size_kb": 0, 00:18:03.277 "state": "online", 00:18:03.277 "raid_level": "raid1", 00:18:03.277 "superblock": true, 00:18:03.277 "num_base_bdevs": 2, 00:18:03.277 "num_base_bdevs_discovered": 1, 00:18:03.277 "num_base_bdevs_operational": 1, 00:18:03.277 "base_bdevs_list": [ 00:18:03.277 { 00:18:03.277 "name": null, 00:18:03.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.277 "is_configured": false, 00:18:03.277 "data_offset": 0, 00:18:03.277 "data_size": 7936 00:18:03.277 }, 00:18:03.277 { 00:18:03.277 "name": "BaseBdev2", 00:18:03.277 "uuid": "49c561ff-296f-554d-a541-496d0bd4b6f6", 00:18:03.277 "is_configured": true, 00:18:03.277 "data_offset": 256, 00:18:03.277 "data_size": 7936 00:18:03.277 } 00:18:03.277 ] 00:18:03.277 }' 00:18:03.277 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.277 20:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.557 20:30:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:03.557 20:30:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.557 20:30:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.557 [2024-11-26 20:30:57.073255] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:03.816 [2024-11-26 20:30:57.077799] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:03.816 20:30:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.816 20:30:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:03.816 [2024-11-26 20:30:57.080285] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:04.754 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:04.754 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:04.754 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:04.754 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:04.754 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:04.754 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.754 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.754 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:04.754 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.754 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.754 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:04.754 "name": "raid_bdev1", 00:18:04.754 
"uuid": "c3e717f4-0772-4b33-a927-eace265d22f0", 00:18:04.754 "strip_size_kb": 0, 00:18:04.754 "state": "online", 00:18:04.754 "raid_level": "raid1", 00:18:04.754 "superblock": true, 00:18:04.754 "num_base_bdevs": 2, 00:18:04.754 "num_base_bdevs_discovered": 2, 00:18:04.754 "num_base_bdevs_operational": 2, 00:18:04.754 "process": { 00:18:04.754 "type": "rebuild", 00:18:04.754 "target": "spare", 00:18:04.754 "progress": { 00:18:04.754 "blocks": 2560, 00:18:04.754 "percent": 32 00:18:04.754 } 00:18:04.754 }, 00:18:04.754 "base_bdevs_list": [ 00:18:04.754 { 00:18:04.754 "name": "spare", 00:18:04.754 "uuid": "61a09fd6-6af7-5bd5-b260-7cac1c8400f6", 00:18:04.754 "is_configured": true, 00:18:04.754 "data_offset": 256, 00:18:04.754 "data_size": 7936 00:18:04.754 }, 00:18:04.754 { 00:18:04.754 "name": "BaseBdev2", 00:18:04.754 "uuid": "49c561ff-296f-554d-a541-496d0bd4b6f6", 00:18:04.754 "is_configured": true, 00:18:04.754 "data_offset": 256, 00:18:04.754 "data_size": 7936 00:18:04.754 } 00:18:04.754 ] 00:18:04.754 }' 00:18:04.754 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:04.754 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:04.754 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:04.754 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:04.754 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:04.754 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.754 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:04.754 [2024-11-26 20:30:58.234031] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:18:04.754 [2024-11-26 20:30:58.289995] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:04.754 [2024-11-26 20:30:58.290190] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:04.754 [2024-11-26 20:30:58.290215] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:04.754 [2024-11-26 20:30:58.290226] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:04.754 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.754 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:04.754 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:04.754 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:04.754 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:04.754 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:04.755 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:04.755 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.755 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.755 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.755 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.014 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.014 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.014 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.014 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:05.014 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.014 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.014 "name": "raid_bdev1", 00:18:05.014 "uuid": "c3e717f4-0772-4b33-a927-eace265d22f0", 00:18:05.014 "strip_size_kb": 0, 00:18:05.014 "state": "online", 00:18:05.014 "raid_level": "raid1", 00:18:05.014 "superblock": true, 00:18:05.014 "num_base_bdevs": 2, 00:18:05.014 "num_base_bdevs_discovered": 1, 00:18:05.014 "num_base_bdevs_operational": 1, 00:18:05.014 "base_bdevs_list": [ 00:18:05.014 { 00:18:05.014 "name": null, 00:18:05.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.014 "is_configured": false, 00:18:05.014 "data_offset": 0, 00:18:05.014 "data_size": 7936 00:18:05.014 }, 00:18:05.014 { 00:18:05.014 "name": "BaseBdev2", 00:18:05.014 "uuid": "49c561ff-296f-554d-a541-496d0bd4b6f6", 00:18:05.014 "is_configured": true, 00:18:05.014 "data_offset": 256, 00:18:05.014 "data_size": 7936 00:18:05.014 } 00:18:05.014 ] 00:18:05.014 }' 00:18:05.014 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.014 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:05.273 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:05.273 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:18:05.273 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:05.273 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:05.273 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:05.273 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.273 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.273 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.273 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:05.273 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.533 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:05.533 "name": "raid_bdev1", 00:18:05.533 "uuid": "c3e717f4-0772-4b33-a927-eace265d22f0", 00:18:05.533 "strip_size_kb": 0, 00:18:05.533 "state": "online", 00:18:05.533 "raid_level": "raid1", 00:18:05.533 "superblock": true, 00:18:05.533 "num_base_bdevs": 2, 00:18:05.533 "num_base_bdevs_discovered": 1, 00:18:05.533 "num_base_bdevs_operational": 1, 00:18:05.533 "base_bdevs_list": [ 00:18:05.533 { 00:18:05.533 "name": null, 00:18:05.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.533 "is_configured": false, 00:18:05.533 "data_offset": 0, 00:18:05.533 "data_size": 7936 00:18:05.533 }, 00:18:05.533 { 00:18:05.533 "name": "BaseBdev2", 00:18:05.533 "uuid": "49c561ff-296f-554d-a541-496d0bd4b6f6", 00:18:05.533 "is_configured": true, 00:18:05.533 "data_offset": 256, 00:18:05.533 "data_size": 7936 00:18:05.533 } 00:18:05.533 ] 00:18:05.533 }' 
00:18:05.533 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:05.533 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:05.533 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:05.533 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:05.533 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:05.533 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.533 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:05.533 [2024-11-26 20:30:58.955702] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:05.533 [2024-11-26 20:30:58.960093] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:05.533 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.533 20:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:05.533 [2024-11-26 20:30:58.962463] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:06.466 20:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:06.466 20:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:06.466 20:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:06.466 20:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:18:06.466 20:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:06.467 20:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.467 20:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.467 20:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.467 20:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:06.467 20:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.467 20:31:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:06.467 "name": "raid_bdev1", 00:18:06.467 "uuid": "c3e717f4-0772-4b33-a927-eace265d22f0", 00:18:06.467 "strip_size_kb": 0, 00:18:06.467 "state": "online", 00:18:06.467 "raid_level": "raid1", 00:18:06.467 "superblock": true, 00:18:06.467 "num_base_bdevs": 2, 00:18:06.467 "num_base_bdevs_discovered": 2, 00:18:06.467 "num_base_bdevs_operational": 2, 00:18:06.467 "process": { 00:18:06.467 "type": "rebuild", 00:18:06.467 "target": "spare", 00:18:06.467 "progress": { 00:18:06.467 "blocks": 2560, 00:18:06.467 "percent": 32 00:18:06.467 } 00:18:06.467 }, 00:18:06.467 "base_bdevs_list": [ 00:18:06.467 { 00:18:06.467 "name": "spare", 00:18:06.467 "uuid": "61a09fd6-6af7-5bd5-b260-7cac1c8400f6", 00:18:06.467 "is_configured": true, 00:18:06.467 "data_offset": 256, 00:18:06.467 "data_size": 7936 00:18:06.467 }, 00:18:06.467 { 00:18:06.467 "name": "BaseBdev2", 00:18:06.467 "uuid": "49c561ff-296f-554d-a541-496d0bd4b6f6", 00:18:06.467 "is_configured": true, 00:18:06.467 "data_offset": 256, 00:18:06.467 "data_size": 7936 00:18:06.467 } 00:18:06.467 ] 00:18:06.467 }' 00:18:06.467 20:31:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:06.725 20:31:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:06.725 20:31:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:06.725 20:31:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:06.726 20:31:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:06.726 20:31:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:06.726 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:06.726 20:31:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:06.726 20:31:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:06.726 20:31:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:06.726 20:31:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=649 00:18:06.726 20:31:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:06.726 20:31:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:06.726 20:31:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:06.726 20:31:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:06.726 20:31:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:06.726 20:31:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:06.726 20:31:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.726 20:31:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.726 20:31:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.726 20:31:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:06.726 20:31:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.726 20:31:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:06.726 "name": "raid_bdev1", 00:18:06.726 "uuid": "c3e717f4-0772-4b33-a927-eace265d22f0", 00:18:06.726 "strip_size_kb": 0, 00:18:06.726 "state": "online", 00:18:06.726 "raid_level": "raid1", 00:18:06.726 "superblock": true, 00:18:06.726 "num_base_bdevs": 2, 00:18:06.726 "num_base_bdevs_discovered": 2, 00:18:06.726 "num_base_bdevs_operational": 2, 00:18:06.726 "process": { 00:18:06.726 "type": "rebuild", 00:18:06.726 "target": "spare", 00:18:06.726 "progress": { 00:18:06.726 "blocks": 2816, 00:18:06.726 "percent": 35 00:18:06.726 } 00:18:06.726 }, 00:18:06.726 "base_bdevs_list": [ 00:18:06.726 { 00:18:06.726 "name": "spare", 00:18:06.726 "uuid": "61a09fd6-6af7-5bd5-b260-7cac1c8400f6", 00:18:06.726 "is_configured": true, 00:18:06.726 "data_offset": 256, 00:18:06.726 "data_size": 7936 00:18:06.726 }, 00:18:06.726 { 00:18:06.726 "name": "BaseBdev2", 00:18:06.726 "uuid": "49c561ff-296f-554d-a541-496d0bd4b6f6", 00:18:06.726 "is_configured": true, 00:18:06.726 "data_offset": 256, 00:18:06.726 "data_size": 7936 00:18:06.726 } 00:18:06.726 ] 00:18:06.726 }' 00:18:06.726 20:31:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:06.726 20:31:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:06.726 20:31:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:06.726 20:31:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:06.726 20:31:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:08.104 20:31:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:08.104 20:31:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:08.104 20:31:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:08.104 20:31:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:08.104 20:31:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:08.104 20:31:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:08.104 20:31:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.104 20:31:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.104 20:31:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:08.104 20:31:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.104 20:31:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.104 20:31:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:08.104 "name": "raid_bdev1", 00:18:08.104 "uuid": "c3e717f4-0772-4b33-a927-eace265d22f0", 00:18:08.104 "strip_size_kb": 0, 00:18:08.104 "state": "online", 00:18:08.104 "raid_level": "raid1", 00:18:08.104 "superblock": true, 00:18:08.104 "num_base_bdevs": 2, 00:18:08.104 "num_base_bdevs_discovered": 2, 00:18:08.104 "num_base_bdevs_operational": 2, 00:18:08.104 "process": { 00:18:08.104 "type": "rebuild", 00:18:08.105 "target": "spare", 00:18:08.105 "progress": { 00:18:08.105 "blocks": 5632, 00:18:08.105 "percent": 70 00:18:08.105 } 00:18:08.105 }, 00:18:08.105 "base_bdevs_list": [ 00:18:08.105 { 00:18:08.105 "name": "spare", 00:18:08.105 "uuid": "61a09fd6-6af7-5bd5-b260-7cac1c8400f6", 00:18:08.105 "is_configured": true, 00:18:08.105 "data_offset": 256, 00:18:08.105 "data_size": 7936 00:18:08.105 }, 00:18:08.105 { 00:18:08.105 "name": "BaseBdev2", 00:18:08.105 "uuid": "49c561ff-296f-554d-a541-496d0bd4b6f6", 00:18:08.105 "is_configured": true, 00:18:08.105 "data_offset": 256, 00:18:08.105 "data_size": 7936 00:18:08.105 } 00:18:08.105 ] 00:18:08.105 }' 00:18:08.105 20:31:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:08.105 20:31:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:08.105 20:31:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:08.105 20:31:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:08.105 20:31:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:08.674 [2024-11-26 20:31:02.083986] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:08.674 [2024-11-26 20:31:02.084205] 
bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:08.674 [2024-11-26 20:31:02.084412] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:08.936 20:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:08.936 20:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:08.936 20:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:08.936 20:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:08.936 20:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:08.936 20:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:08.936 20:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.936 20:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.936 20:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:08.936 20:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.936 20:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.937 20:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:08.937 "name": "raid_bdev1", 00:18:08.937 "uuid": "c3e717f4-0772-4b33-a927-eace265d22f0", 00:18:08.937 "strip_size_kb": 0, 00:18:08.937 "state": "online", 00:18:08.937 "raid_level": "raid1", 00:18:08.937 "superblock": true, 00:18:08.937 "num_base_bdevs": 2, 00:18:08.937 
"num_base_bdevs_discovered": 2, 00:18:08.937 "num_base_bdevs_operational": 2, 00:18:08.937 "base_bdevs_list": [ 00:18:08.937 { 00:18:08.937 "name": "spare", 00:18:08.937 "uuid": "61a09fd6-6af7-5bd5-b260-7cac1c8400f6", 00:18:08.937 "is_configured": true, 00:18:08.937 "data_offset": 256, 00:18:08.937 "data_size": 7936 00:18:08.937 }, 00:18:08.937 { 00:18:08.937 "name": "BaseBdev2", 00:18:08.937 "uuid": "49c561ff-296f-554d-a541-496d0bd4b6f6", 00:18:08.937 "is_configured": true, 00:18:08.937 "data_offset": 256, 00:18:08.937 "data_size": 7936 00:18:08.937 } 00:18:08.937 ] 00:18:08.937 }' 00:18:08.937 20:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:08.937 20:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:08.937 20:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:09.196 20:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:09.196 20:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:18:09.196 20:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:09.196 20:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:09.196 20:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:09.196 20:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:09.196 20:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:09.196 20:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.196 20:31:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.196 20:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.196 20:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.196 20:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.196 20:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:09.196 "name": "raid_bdev1", 00:18:09.196 "uuid": "c3e717f4-0772-4b33-a927-eace265d22f0", 00:18:09.196 "strip_size_kb": 0, 00:18:09.196 "state": "online", 00:18:09.196 "raid_level": "raid1", 00:18:09.196 "superblock": true, 00:18:09.196 "num_base_bdevs": 2, 00:18:09.196 "num_base_bdevs_discovered": 2, 00:18:09.196 "num_base_bdevs_operational": 2, 00:18:09.196 "base_bdevs_list": [ 00:18:09.196 { 00:18:09.196 "name": "spare", 00:18:09.196 "uuid": "61a09fd6-6af7-5bd5-b260-7cac1c8400f6", 00:18:09.196 "is_configured": true, 00:18:09.196 "data_offset": 256, 00:18:09.196 "data_size": 7936 00:18:09.196 }, 00:18:09.196 { 00:18:09.196 "name": "BaseBdev2", 00:18:09.196 "uuid": "49c561ff-296f-554d-a541-496d0bd4b6f6", 00:18:09.196 "is_configured": true, 00:18:09.196 "data_offset": 256, 00:18:09.196 "data_size": 7936 00:18:09.196 } 00:18:09.196 ] 00:18:09.196 }' 00:18:09.196 20:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:09.196 20:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:09.196 20:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:09.196 20:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:09.196 20:31:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:09.196 20:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:09.196 20:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:09.196 20:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:09.196 20:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:09.196 20:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:09.196 20:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.196 20:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.196 20:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.196 20:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.196 20:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.196 20:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.196 20:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.196 20:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.196 20:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.196 20:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.196 "name": 
"raid_bdev1", 00:18:09.196 "uuid": "c3e717f4-0772-4b33-a927-eace265d22f0", 00:18:09.197 "strip_size_kb": 0, 00:18:09.197 "state": "online", 00:18:09.197 "raid_level": "raid1", 00:18:09.197 "superblock": true, 00:18:09.197 "num_base_bdevs": 2, 00:18:09.197 "num_base_bdevs_discovered": 2, 00:18:09.197 "num_base_bdevs_operational": 2, 00:18:09.197 "base_bdevs_list": [ 00:18:09.197 { 00:18:09.197 "name": "spare", 00:18:09.197 "uuid": "61a09fd6-6af7-5bd5-b260-7cac1c8400f6", 00:18:09.197 "is_configured": true, 00:18:09.197 "data_offset": 256, 00:18:09.197 "data_size": 7936 00:18:09.197 }, 00:18:09.197 { 00:18:09.197 "name": "BaseBdev2", 00:18:09.197 "uuid": "49c561ff-296f-554d-a541-496d0bd4b6f6", 00:18:09.197 "is_configured": true, 00:18:09.197 "data_offset": 256, 00:18:09.197 "data_size": 7936 00:18:09.197 } 00:18:09.197 ] 00:18:09.197 }' 00:18:09.197 20:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.197 20:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.767 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:09.768 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.768 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.768 [2024-11-26 20:31:03.080883] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:09.768 [2024-11-26 20:31:03.080991] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:09.768 [2024-11-26 20:31:03.081123] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:09.768 [2024-11-26 20:31:03.081219] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:09.768 [2024-11-26 
20:31:03.081234] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:18:09.768 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.768 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:18:09.768 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.768 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.768 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.768 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.768 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:09.768 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:18:09.768 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:09.768 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:09.768 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.768 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.768 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.768 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:09.768 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.768 20:31:03 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.768 [2024-11-26 20:31:03.152813] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:09.768 [2024-11-26 20:31:03.152982] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:09.768 [2024-11-26 20:31:03.153049] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:09.768 [2024-11-26 20:31:03.153087] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:09.768 [2024-11-26 20:31:03.155519] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:09.768 [2024-11-26 20:31:03.155639] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:09.768 [2024-11-26 20:31:03.155757] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:09.768 [2024-11-26 20:31:03.155844] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:09.768 [2024-11-26 20:31:03.155999] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:09.768 spare 00:18:09.768 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.768 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:09.768 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.768 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.768 [2024-11-26 20:31:03.255981] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:18:09.768 [2024-11-26 20:31:03.256128] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:09.768 [2024-11-26 20:31:03.256321] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:09.768 [2024-11-26 20:31:03.256455] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:18:09.768 [2024-11-26 20:31:03.256484] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:18:09.768 [2024-11-26 20:31:03.256645] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:09.768 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.768 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:09.768 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:09.768 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:09.768 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:09.768 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:09.768 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:09.768 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.768 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.768 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.768 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.768 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.768 20:31:03 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.768 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.768 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.768 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.768 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.768 "name": "raid_bdev1", 00:18:09.768 "uuid": "c3e717f4-0772-4b33-a927-eace265d22f0", 00:18:09.768 "strip_size_kb": 0, 00:18:09.768 "state": "online", 00:18:09.768 "raid_level": "raid1", 00:18:09.768 "superblock": true, 00:18:09.768 "num_base_bdevs": 2, 00:18:09.768 "num_base_bdevs_discovered": 2, 00:18:09.768 "num_base_bdevs_operational": 2, 00:18:09.768 "base_bdevs_list": [ 00:18:09.768 { 00:18:09.768 "name": "spare", 00:18:09.768 "uuid": "61a09fd6-6af7-5bd5-b260-7cac1c8400f6", 00:18:09.768 "is_configured": true, 00:18:09.768 "data_offset": 256, 00:18:09.768 "data_size": 7936 00:18:09.768 }, 00:18:09.768 { 00:18:09.768 "name": "BaseBdev2", 00:18:09.768 "uuid": "49c561ff-296f-554d-a541-496d0bd4b6f6", 00:18:09.768 "is_configured": true, 00:18:09.768 "data_offset": 256, 00:18:09.768 "data_size": 7936 00:18:09.768 } 00:18:09.768 ] 00:18:09.768 }' 00:18:09.768 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.768 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.335 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:10.335 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:10.335 20:31:03 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:10.335 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:10.335 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:10.335 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.335 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.335 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.335 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.335 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.335 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:10.335 "name": "raid_bdev1", 00:18:10.335 "uuid": "c3e717f4-0772-4b33-a927-eace265d22f0", 00:18:10.335 "strip_size_kb": 0, 00:18:10.335 "state": "online", 00:18:10.335 "raid_level": "raid1", 00:18:10.335 "superblock": true, 00:18:10.335 "num_base_bdevs": 2, 00:18:10.335 "num_base_bdevs_discovered": 2, 00:18:10.335 "num_base_bdevs_operational": 2, 00:18:10.335 "base_bdevs_list": [ 00:18:10.335 { 00:18:10.335 "name": "spare", 00:18:10.335 "uuid": "61a09fd6-6af7-5bd5-b260-7cac1c8400f6", 00:18:10.335 "is_configured": true, 00:18:10.335 "data_offset": 256, 00:18:10.335 "data_size": 7936 00:18:10.335 }, 00:18:10.335 { 00:18:10.335 "name": "BaseBdev2", 00:18:10.335 "uuid": "49c561ff-296f-554d-a541-496d0bd4b6f6", 00:18:10.335 "is_configured": true, 00:18:10.335 "data_offset": 256, 00:18:10.335 "data_size": 7936 00:18:10.335 } 00:18:10.335 ] 00:18:10.335 }' 00:18:10.335 20:31:03 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:10.595 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:10.595 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:10.595 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:10.595 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.595 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.595 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.595 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:10.595 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.595 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:10.595 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:10.595 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.595 20:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.595 [2024-11-26 20:31:03.999488] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:10.595 20:31:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.595 20:31:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:10.595 20:31:04 
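The `verify_raid_bdev_process` helper traced above extracts `.process.type // "none"` and `.process.target // "none"` from the `bdev_raid_get_bdevs` JSON, so a raid bdev with no background process compares equal to `none`. A minimal standalone sketch of that fallback logic (hypothetical, not part of the SPDK test suite; it substitutes grep/cut for the `jq` filter purely for illustration):

```shell
#!/usr/bin/env bash
# Sketch of the check verify_raid_bdev_process performs: pull the
# background-process type out of the raid bdev info and fall back to
# "none" when no rebuild is running. The sample JSON below is a
# hypothetical stand-in for real bdev_raid_get_bdevs output.
raid_bdev_info='{"name": "raid_bdev1", "state": "online"}'
# crude stand-in for: jq -r '.process.type // "none"'
process_type=$(printf '%s' "$raid_bdev_info" | grep -o '"type": "[^"]*"' | head -n1 | cut -d'"' -f4)
process_type=${process_type:-none}
if [ "$process_type" = "none" ]; then
    echo "no rebuild in progress"
fi
```

When a rebuild is active (as later in this log), the same extraction yields `rebuild` and the target base bdev name instead of the `none` fallback.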
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:10.595 20:31:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:10.595 20:31:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:10.595 20:31:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:10.595 20:31:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:10.595 20:31:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.595 20:31:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.595 20:31:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:10.595 20:31:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.595 20:31:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.595 20:31:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.595 20:31:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.595 20:31:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.595 20:31:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.595 20:31:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.595 "name": "raid_bdev1", 00:18:10.595 "uuid": "c3e717f4-0772-4b33-a927-eace265d22f0", 00:18:10.595 "strip_size_kb": 0, 00:18:10.595 "state": "online", 00:18:10.595 
"raid_level": "raid1", 00:18:10.595 "superblock": true, 00:18:10.595 "num_base_bdevs": 2, 00:18:10.595 "num_base_bdevs_discovered": 1, 00:18:10.595 "num_base_bdevs_operational": 1, 00:18:10.595 "base_bdevs_list": [ 00:18:10.595 { 00:18:10.595 "name": null, 00:18:10.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.595 "is_configured": false, 00:18:10.595 "data_offset": 0, 00:18:10.595 "data_size": 7936 00:18:10.595 }, 00:18:10.595 { 00:18:10.595 "name": "BaseBdev2", 00:18:10.595 "uuid": "49c561ff-296f-554d-a541-496d0bd4b6f6", 00:18:10.595 "is_configured": true, 00:18:10.595 "data_offset": 256, 00:18:10.595 "data_size": 7936 00:18:10.595 } 00:18:10.595 ] 00:18:10.595 }' 00:18:10.595 20:31:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.595 20:31:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:11.163 20:31:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:11.163 20:31:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.163 20:31:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:11.163 [2024-11-26 20:31:04.510673] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:11.164 [2024-11-26 20:31:04.510968] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:11.164 [2024-11-26 20:31:04.511056] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
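After `bdev_raid_remove_base_bdev spare`, the JSON above shows the raid1 bdev still `online` but with `num_base_bdevs_discovered` and `num_base_bdevs_operational` dropped to 1 and the removed slot zeroed out. A sketch of the field checks `verify_raid_bdev_state` effectively makes (hypothetical illustration, assuming flat string extraction rather than the real `jq` parsing):

```shell
#!/usr/bin/env bash
# Sketch of what verify_raid_bdev_state asserts once the spare base bdev
# is removed: raid1 survives degraded, so state stays "online" while the
# discovered base-bdev count falls to 1. Sample fields mirror the log.
raid_bdev_info='"state": "online", "raid_level": "raid1", "num_base_bdevs_discovered": 1'
state=$(printf '%s' "$raid_bdev_info" | grep -o '"state": "[^"]*"' | cut -d'"' -f4)
discovered=$(printf '%s' "$raid_bdev_info" | grep -o '"num_base_bdevs_discovered": [0-9]*' | grep -o '[0-9]*$')
if [ "$state" = "online" ] && [ "$discovered" -eq 1 ]; then
    echo "degraded but online"
fi
```

This mirrors why the test can immediately re-add `spare` afterwards: the array never left the `online` state, only its membership count changed.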
00:18:11.164 [2024-11-26 20:31:04.511164] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:11.164 [2024-11-26 20:31:04.515306] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:11.164 20:31:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.164 20:31:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:11.164 [2024-11-26 20:31:04.517598] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:12.101 20:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:12.101 20:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:12.101 20:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:12.101 20:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:12.101 20:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:12.101 20:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.101 20:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.101 20:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:12.101 20:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.101 20:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.101 20:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:18:12.101 "name": "raid_bdev1", 00:18:12.101 "uuid": "c3e717f4-0772-4b33-a927-eace265d22f0", 00:18:12.101 "strip_size_kb": 0, 00:18:12.101 "state": "online", 00:18:12.101 "raid_level": "raid1", 00:18:12.101 "superblock": true, 00:18:12.101 "num_base_bdevs": 2, 00:18:12.101 "num_base_bdevs_discovered": 2, 00:18:12.101 "num_base_bdevs_operational": 2, 00:18:12.101 "process": { 00:18:12.101 "type": "rebuild", 00:18:12.101 "target": "spare", 00:18:12.101 "progress": { 00:18:12.101 "blocks": 2560, 00:18:12.101 "percent": 32 00:18:12.101 } 00:18:12.101 }, 00:18:12.101 "base_bdevs_list": [ 00:18:12.101 { 00:18:12.101 "name": "spare", 00:18:12.101 "uuid": "61a09fd6-6af7-5bd5-b260-7cac1c8400f6", 00:18:12.101 "is_configured": true, 00:18:12.101 "data_offset": 256, 00:18:12.101 "data_size": 7936 00:18:12.101 }, 00:18:12.101 { 00:18:12.101 "name": "BaseBdev2", 00:18:12.101 "uuid": "49c561ff-296f-554d-a541-496d0bd4b6f6", 00:18:12.101 "is_configured": true, 00:18:12.101 "data_offset": 256, 00:18:12.101 "data_size": 7936 00:18:12.101 } 00:18:12.101 ] 00:18:12.101 }' 00:18:12.101 20:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:12.101 20:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:12.101 20:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:12.361 20:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:12.361 20:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:12.361 20:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.361 20:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:12.361 [2024-11-26 20:31:05.678442] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:12.361 [2024-11-26 20:31:05.725133] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:12.361 [2024-11-26 20:31:05.725219] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:12.361 [2024-11-26 20:31:05.725240] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:12.361 [2024-11-26 20:31:05.725249] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:12.361 20:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.361 20:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:12.361 20:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:12.361 20:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:12.361 20:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:12.361 20:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:12.361 20:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:12.361 20:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:12.361 20:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:12.361 20:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:12.361 20:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:12.361 20:31:05 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.361 20:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.361 20:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:12.361 20:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.361 20:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.361 20:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.361 "name": "raid_bdev1", 00:18:12.361 "uuid": "c3e717f4-0772-4b33-a927-eace265d22f0", 00:18:12.362 "strip_size_kb": 0, 00:18:12.362 "state": "online", 00:18:12.362 "raid_level": "raid1", 00:18:12.362 "superblock": true, 00:18:12.362 "num_base_bdevs": 2, 00:18:12.362 "num_base_bdevs_discovered": 1, 00:18:12.362 "num_base_bdevs_operational": 1, 00:18:12.362 "base_bdevs_list": [ 00:18:12.362 { 00:18:12.362 "name": null, 00:18:12.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.362 "is_configured": false, 00:18:12.362 "data_offset": 0, 00:18:12.362 "data_size": 7936 00:18:12.362 }, 00:18:12.362 { 00:18:12.362 "name": "BaseBdev2", 00:18:12.362 "uuid": "49c561ff-296f-554d-a541-496d0bd4b6f6", 00:18:12.362 "is_configured": true, 00:18:12.362 "data_offset": 256, 00:18:12.362 "data_size": 7936 00:18:12.362 } 00:18:12.362 ] 00:18:12.362 }' 00:18:12.362 20:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:12.362 20:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:12.930 20:31:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:12.930 20:31:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.930 20:31:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:12.930 [2024-11-26 20:31:06.234351] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:12.930 [2024-11-26 20:31:06.234511] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.930 [2024-11-26 20:31:06.234547] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:12.930 [2024-11-26 20:31:06.234558] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.930 [2024-11-26 20:31:06.234805] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.930 [2024-11-26 20:31:06.234822] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:12.930 [2024-11-26 20:31:06.234892] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:12.930 [2024-11-26 20:31:06.234906] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:12.930 [2024-11-26 20:31:06.234919] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:12.930 [2024-11-26 20:31:06.234943] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:12.930 [2024-11-26 20:31:06.239078] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:12.930 spare 00:18:12.930 20:31:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.930 20:31:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:12.930 [2024-11-26 20:31:06.241241] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:13.865 20:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:13.865 20:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:13.865 20:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:13.865 20:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:13.865 20:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:13.865 20:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.865 20:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.865 20:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.865 20:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:13.865 20:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.865 20:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:18:13.865 "name": "raid_bdev1", 00:18:13.865 "uuid": "c3e717f4-0772-4b33-a927-eace265d22f0", 00:18:13.865 "strip_size_kb": 0, 00:18:13.865 "state": "online", 00:18:13.865 "raid_level": "raid1", 00:18:13.865 "superblock": true, 00:18:13.865 "num_base_bdevs": 2, 00:18:13.865 "num_base_bdevs_discovered": 2, 00:18:13.865 "num_base_bdevs_operational": 2, 00:18:13.865 "process": { 00:18:13.865 "type": "rebuild", 00:18:13.865 "target": "spare", 00:18:13.865 "progress": { 00:18:13.865 "blocks": 2560, 00:18:13.865 "percent": 32 00:18:13.865 } 00:18:13.865 }, 00:18:13.865 "base_bdevs_list": [ 00:18:13.865 { 00:18:13.865 "name": "spare", 00:18:13.865 "uuid": "61a09fd6-6af7-5bd5-b260-7cac1c8400f6", 00:18:13.865 "is_configured": true, 00:18:13.865 "data_offset": 256, 00:18:13.865 "data_size": 7936 00:18:13.865 }, 00:18:13.865 { 00:18:13.865 "name": "BaseBdev2", 00:18:13.865 "uuid": "49c561ff-296f-554d-a541-496d0bd4b6f6", 00:18:13.865 "is_configured": true, 00:18:13.865 "data_offset": 256, 00:18:13.865 "data_size": 7936 00:18:13.865 } 00:18:13.865 ] 00:18:13.865 }' 00:18:13.865 20:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:13.865 20:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:13.865 20:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:13.865 20:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:13.865 20:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:13.865 20:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.865 20:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:13.865 [2024-11-26 
20:31:07.386053] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:14.123 [2024-11-26 20:31:07.448724] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:14.123 [2024-11-26 20:31:07.448832] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:14.123 [2024-11-26 20:31:07.448850] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:14.123 [2024-11-26 20:31:07.448861] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:14.123 20:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.123 20:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:14.123 20:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:14.123 20:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:14.123 20:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:14.123 20:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:14.123 20:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:14.123 20:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.123 20:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.123 20:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.123 20:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.123 20:31:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.123 20:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.123 20:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.123 20:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.123 20:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.123 20:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.123 "name": "raid_bdev1", 00:18:14.123 "uuid": "c3e717f4-0772-4b33-a927-eace265d22f0", 00:18:14.123 "strip_size_kb": 0, 00:18:14.123 "state": "online", 00:18:14.123 "raid_level": "raid1", 00:18:14.123 "superblock": true, 00:18:14.123 "num_base_bdevs": 2, 00:18:14.123 "num_base_bdevs_discovered": 1, 00:18:14.123 "num_base_bdevs_operational": 1, 00:18:14.123 "base_bdevs_list": [ 00:18:14.123 { 00:18:14.123 "name": null, 00:18:14.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.123 "is_configured": false, 00:18:14.123 "data_offset": 0, 00:18:14.123 "data_size": 7936 00:18:14.123 }, 00:18:14.123 { 00:18:14.123 "name": "BaseBdev2", 00:18:14.123 "uuid": "49c561ff-296f-554d-a541-496d0bd4b6f6", 00:18:14.123 "is_configured": true, 00:18:14.123 "data_offset": 256, 00:18:14.123 "data_size": 7936 00:18:14.123 } 00:18:14.123 ] 00:18:14.123 }' 00:18:14.123 20:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.123 20:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.690 20:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:14.690 20:31:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:14.690 20:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:14.690 20:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:14.690 20:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:14.690 20:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.690 20:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.690 20:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.690 20:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.690 20:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.690 20:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:14.690 "name": "raid_bdev1", 00:18:14.690 "uuid": "c3e717f4-0772-4b33-a927-eace265d22f0", 00:18:14.690 "strip_size_kb": 0, 00:18:14.690 "state": "online", 00:18:14.690 "raid_level": "raid1", 00:18:14.690 "superblock": true, 00:18:14.690 "num_base_bdevs": 2, 00:18:14.690 "num_base_bdevs_discovered": 1, 00:18:14.690 "num_base_bdevs_operational": 1, 00:18:14.690 "base_bdevs_list": [ 00:18:14.690 { 00:18:14.690 "name": null, 00:18:14.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.690 "is_configured": false, 00:18:14.690 "data_offset": 0, 00:18:14.690 "data_size": 7936 00:18:14.690 }, 00:18:14.690 { 00:18:14.690 "name": "BaseBdev2", 00:18:14.690 "uuid": "49c561ff-296f-554d-a541-496d0bd4b6f6", 00:18:14.690 "is_configured": true, 00:18:14.690 "data_offset": 256, 
00:18:14.690 "data_size": 7936 00:18:14.690 } 00:18:14.690 ] 00:18:14.690 }' 00:18:14.690 20:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:14.690 20:31:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:14.690 20:31:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:14.690 20:31:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:14.690 20:31:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:14.690 20:31:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.690 20:31:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.690 20:31:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.690 20:31:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:14.690 20:31:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.690 20:31:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.690 [2024-11-26 20:31:08.097930] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:14.690 [2024-11-26 20:31:08.098038] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:14.690 [2024-11-26 20:31:08.098062] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:14.690 [2024-11-26 20:31:08.098076] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:14.690 [2024-11-26 20:31:08.098330] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:14.690 [2024-11-26 20:31:08.098359] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:14.690 [2024-11-26 20:31:08.098421] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:14.690 [2024-11-26 20:31:08.098447] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:14.690 [2024-11-26 20:31:08.098456] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:14.690 [2024-11-26 20:31:08.098488] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:14.690 BaseBdev1 00:18:14.690 20:31:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.690 20:31:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:15.666 20:31:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:15.666 20:31:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:15.666 20:31:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:15.666 20:31:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:15.666 20:31:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:15.666 20:31:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:15.666 20:31:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:15.666 20:31:09 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:15.666 20:31:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:15.666 20:31:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:15.666 20:31:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.666 20:31:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.666 20:31:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.666 20:31:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.666 20:31:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.666 20:31:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:15.666 "name": "raid_bdev1", 00:18:15.666 "uuid": "c3e717f4-0772-4b33-a927-eace265d22f0", 00:18:15.666 "strip_size_kb": 0, 00:18:15.666 "state": "online", 00:18:15.666 "raid_level": "raid1", 00:18:15.666 "superblock": true, 00:18:15.666 "num_base_bdevs": 2, 00:18:15.666 "num_base_bdevs_discovered": 1, 00:18:15.666 "num_base_bdevs_operational": 1, 00:18:15.666 "base_bdevs_list": [ 00:18:15.666 { 00:18:15.666 "name": null, 00:18:15.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.666 "is_configured": false, 00:18:15.666 "data_offset": 0, 00:18:15.666 "data_size": 7936 00:18:15.666 }, 00:18:15.666 { 00:18:15.666 "name": "BaseBdev2", 00:18:15.666 "uuid": "49c561ff-296f-554d-a541-496d0bd4b6f6", 00:18:15.666 "is_configured": true, 00:18:15.666 "data_offset": 256, 00:18:15.666 "data_size": 7936 00:18:15.666 } 00:18:15.666 ] 00:18:15.666 }' 00:18:15.666 20:31:09 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:15.666 20:31:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.231 20:31:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:16.231 20:31:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:16.231 20:31:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:16.231 20:31:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:16.231 20:31:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:16.231 20:31:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.231 20:31:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.231 20:31:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.231 20:31:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.231 20:31:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.231 20:31:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:16.231 "name": "raid_bdev1", 00:18:16.231 "uuid": "c3e717f4-0772-4b33-a927-eace265d22f0", 00:18:16.231 "strip_size_kb": 0, 00:18:16.231 "state": "online", 00:18:16.231 "raid_level": "raid1", 00:18:16.231 "superblock": true, 00:18:16.231 "num_base_bdevs": 2, 00:18:16.231 "num_base_bdevs_discovered": 1, 00:18:16.231 "num_base_bdevs_operational": 1, 00:18:16.231 "base_bdevs_list": [ 00:18:16.231 { 00:18:16.231 "name": 
null, 00:18:16.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.231 "is_configured": false, 00:18:16.231 "data_offset": 0, 00:18:16.231 "data_size": 7936 00:18:16.231 }, 00:18:16.231 { 00:18:16.231 "name": "BaseBdev2", 00:18:16.231 "uuid": "49c561ff-296f-554d-a541-496d0bd4b6f6", 00:18:16.231 "is_configured": true, 00:18:16.231 "data_offset": 256, 00:18:16.231 "data_size": 7936 00:18:16.231 } 00:18:16.231 ] 00:18:16.231 }' 00:18:16.231 20:31:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:16.231 20:31:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:16.231 20:31:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:16.231 20:31:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:16.231 20:31:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:16.231 20:31:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:18:16.231 20:31:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:16.231 20:31:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:16.231 20:31:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:16.231 20:31:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:16.231 20:31:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:16.231 20:31:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:16.231 20:31:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.231 20:31:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.231 [2024-11-26 20:31:09.747313] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:16.231 [2024-11-26 20:31:09.747504] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:16.231 [2024-11-26 20:31:09.747518] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:16.231 request: 00:18:16.231 { 00:18:16.231 "base_bdev": "BaseBdev1", 00:18:16.231 "raid_bdev": "raid_bdev1", 00:18:16.231 "method": "bdev_raid_add_base_bdev", 00:18:16.231 "req_id": 1 00:18:16.231 } 00:18:16.231 Got JSON-RPC error response 00:18:16.231 response: 00:18:16.231 { 00:18:16.231 "code": -22, 00:18:16.231 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:16.232 } 00:18:16.232 20:31:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:16.232 20:31:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:18:16.232 20:31:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:16.232 20:31:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:16.232 20:31:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:16.232 20:31:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:17.607 20:31:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:18:17.607 20:31:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:17.607 20:31:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:17.607 20:31:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:17.607 20:31:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:17.607 20:31:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:17.607 20:31:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:17.607 20:31:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:17.607 20:31:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:17.607 20:31:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:17.607 20:31:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.607 20:31:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.607 20:31:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.607 20:31:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.607 20:31:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.607 20:31:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.607 "name": "raid_bdev1", 00:18:17.607 "uuid": "c3e717f4-0772-4b33-a927-eace265d22f0", 00:18:17.607 "strip_size_kb": 0, 
00:18:17.607 "state": "online", 00:18:17.607 "raid_level": "raid1", 00:18:17.607 "superblock": true, 00:18:17.607 "num_base_bdevs": 2, 00:18:17.607 "num_base_bdevs_discovered": 1, 00:18:17.607 "num_base_bdevs_operational": 1, 00:18:17.607 "base_bdevs_list": [ 00:18:17.607 { 00:18:17.607 "name": null, 00:18:17.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.607 "is_configured": false, 00:18:17.607 "data_offset": 0, 00:18:17.607 "data_size": 7936 00:18:17.607 }, 00:18:17.607 { 00:18:17.607 "name": "BaseBdev2", 00:18:17.607 "uuid": "49c561ff-296f-554d-a541-496d0bd4b6f6", 00:18:17.607 "is_configured": true, 00:18:17.607 "data_offset": 256, 00:18:17.607 "data_size": 7936 00:18:17.607 } 00:18:17.607 ] 00:18:17.607 }' 00:18:17.607 20:31:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.607 20:31:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.866 20:31:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:17.866 20:31:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:17.866 20:31:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:17.866 20:31:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:17.866 20:31:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:17.866 20:31:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.866 20:31:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.866 20:31:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.866 
20:31:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.866 20:31:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.866 20:31:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:17.866 "name": "raid_bdev1", 00:18:17.866 "uuid": "c3e717f4-0772-4b33-a927-eace265d22f0", 00:18:17.866 "strip_size_kb": 0, 00:18:17.866 "state": "online", 00:18:17.866 "raid_level": "raid1", 00:18:17.866 "superblock": true, 00:18:17.866 "num_base_bdevs": 2, 00:18:17.866 "num_base_bdevs_discovered": 1, 00:18:17.866 "num_base_bdevs_operational": 1, 00:18:17.866 "base_bdevs_list": [ 00:18:17.867 { 00:18:17.867 "name": null, 00:18:17.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.867 "is_configured": false, 00:18:17.867 "data_offset": 0, 00:18:17.867 "data_size": 7936 00:18:17.867 }, 00:18:17.867 { 00:18:17.867 "name": "BaseBdev2", 00:18:17.867 "uuid": "49c561ff-296f-554d-a541-496d0bd4b6f6", 00:18:17.867 "is_configured": true, 00:18:17.867 "data_offset": 256, 00:18:17.867 "data_size": 7936 00:18:17.867 } 00:18:17.867 ] 00:18:17.867 }' 00:18:17.867 20:31:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:17.867 20:31:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:17.867 20:31:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:17.867 20:31:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:17.867 20:31:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 99965 00:18:17.867 20:31:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 99965 ']' 00:18:17.867 20:31:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 99965 00:18:17.867 20:31:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:18:18.128 20:31:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:18.128 20:31:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99965 00:18:18.128 20:31:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:18.128 20:31:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:18.128 20:31:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99965' 00:18:18.128 killing process with pid 99965 00:18:18.128 20:31:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 99965 00:18:18.128 Received shutdown signal, test time was about 60.000000 seconds 00:18:18.128 00:18:18.128 Latency(us) 00:18:18.128 [2024-11-26T20:31:11.680Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:18.128 [2024-11-26T20:31:11.680Z] =================================================================================================================== 00:18:18.128 [2024-11-26T20:31:11.680Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:18.128 [2024-11-26 20:31:11.458463] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:18.128 [2024-11-26 20:31:11.458638] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:18.128 20:31:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 99965 00:18:18.128 [2024-11-26 20:31:11.458732] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:18:18.128 [2024-11-26 20:31:11.458746] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:18:18.128 [2024-11-26 20:31:11.512371] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:18.386 20:31:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:18:18.386 00:18:18.386 real 0m17.011s 00:18:18.386 user 0m23.007s 00:18:18.386 sys 0m1.707s 00:18:18.386 20:31:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:18.386 20:31:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.386 ************************************ 00:18:18.386 END TEST raid_rebuild_test_sb_md_interleaved 00:18:18.386 ************************************ 00:18:18.386 20:31:11 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:18:18.386 20:31:11 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:18:18.386 20:31:11 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 99965 ']' 00:18:18.386 20:31:11 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 99965 00:18:18.386 20:31:11 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:18:18.386 00:18:18.386 real 10m30.586s 00:18:18.386 user 14m51.517s 00:18:18.386 sys 1m56.911s 00:18:18.645 20:31:11 bdev_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:18.645 20:31:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:18.645 ************************************ 00:18:18.645 END TEST bdev_raid 00:18:18.645 ************************************ 00:18:18.645 20:31:11 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:18.645 20:31:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:18.645 20:31:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:18.645 20:31:11 -- common/autotest_common.sh@10 -- # set +x 00:18:18.645 
************************************ 00:18:18.645 START TEST spdkcli_raid 00:18:18.645 ************************************ 00:18:18.645 20:31:12 spdkcli_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:18.645 * Looking for test storage... 00:18:18.645 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:18.645 20:31:12 spdkcli_raid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:18.645 20:31:12 spdkcli_raid -- common/autotest_common.sh@1681 -- # lcov --version 00:18:18.645 20:31:12 spdkcli_raid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:18.645 20:31:12 spdkcli_raid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:18.645 20:31:12 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:18.645 20:31:12 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:18.645 20:31:12 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:18.645 20:31:12 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:18:18.645 20:31:12 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:18:18.645 20:31:12 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:18:18.645 20:31:12 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:18:18.645 20:31:12 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:18:18.645 20:31:12 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:18:18.645 20:31:12 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:18:18.645 20:31:12 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:18.645 20:31:12 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:18:18.645 20:31:12 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:18:18.645 20:31:12 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:18.645 20:31:12 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:18.904 20:31:12 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:18:18.904 20:31:12 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:18:18.904 20:31:12 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:18.904 20:31:12 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:18:18.904 20:31:12 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:18:18.904 20:31:12 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:18:18.904 20:31:12 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:18:18.904 20:31:12 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:18.904 20:31:12 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:18:18.904 20:31:12 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:18:18.904 20:31:12 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:18.904 20:31:12 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:18.904 20:31:12 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:18:18.904 20:31:12 spdkcli_raid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:18.904 20:31:12 spdkcli_raid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:18.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.904 --rc genhtml_branch_coverage=1 00:18:18.904 --rc genhtml_function_coverage=1 00:18:18.904 --rc genhtml_legend=1 00:18:18.905 --rc geninfo_all_blocks=1 00:18:18.905 --rc geninfo_unexecuted_blocks=1 00:18:18.905 00:18:18.905 ' 00:18:18.905 20:31:12 spdkcli_raid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:18.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.905 --rc genhtml_branch_coverage=1 00:18:18.905 --rc genhtml_function_coverage=1 00:18:18.905 --rc genhtml_legend=1 00:18:18.905 --rc geninfo_all_blocks=1 00:18:18.905 --rc geninfo_unexecuted_blocks=1 00:18:18.905 00:18:18.905 ' 00:18:18.905 
20:31:12 spdkcli_raid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:18.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.905 --rc genhtml_branch_coverage=1 00:18:18.905 --rc genhtml_function_coverage=1 00:18:18.905 --rc genhtml_legend=1 00:18:18.905 --rc geninfo_all_blocks=1 00:18:18.905 --rc geninfo_unexecuted_blocks=1 00:18:18.905 00:18:18.905 ' 00:18:18.905 20:31:12 spdkcli_raid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:18.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.905 --rc genhtml_branch_coverage=1 00:18:18.905 --rc genhtml_function_coverage=1 00:18:18.905 --rc genhtml_legend=1 00:18:18.905 --rc geninfo_all_blocks=1 00:18:18.905 --rc geninfo_unexecuted_blocks=1 00:18:18.905 00:18:18.905 ' 00:18:18.905 20:31:12 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:18:18.905 20:31:12 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:18:18.905 20:31:12 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:18:18.905 20:31:12 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:18:18.905 20:31:12 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:18:18.905 20:31:12 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:18:18.905 20:31:12 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:18:18.905 20:31:12 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:18:18.905 20:31:12 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:18:18.905 20:31:12 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:18:18.905 20:31:12 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:18:18.905 20:31:12 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:18:18.905 20:31:12 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:18:18.905 20:31:12 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:18:18.905 20:31:12 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:18:18.905 20:31:12 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:18:18.905 20:31:12 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:18:18.905 20:31:12 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:18:18.905 20:31:12 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:18:18.905 20:31:12 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:18:18.905 20:31:12 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:18:18.905 20:31:12 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:18:18.905 20:31:12 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:18:18.905 20:31:12 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:18:18.905 20:31:12 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:18:18.905 20:31:12 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:18.905 20:31:12 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:18.905 20:31:12 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:18.905 20:31:12 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:18:18.905 20:31:12 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:18:18.905 20:31:12 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:18:18.905 20:31:12 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:18:18.905 20:31:12 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:18:18.905 20:31:12 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:18.905 20:31:12 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:18.905 20:31:12 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:18:18.905 20:31:12 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=100638 00:18:18.905 20:31:12 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:18:18.905 20:31:12 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 100638 00:18:18.905 20:31:12 spdkcli_raid -- common/autotest_common.sh@831 -- # '[' -z 100638 ']' 00:18:18.905 20:31:12 spdkcli_raid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.905 20:31:12 spdkcli_raid -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:18.905 20:31:12 spdkcli_raid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:18.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:18.905 20:31:12 spdkcli_raid -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:18.905 20:31:12 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:18.905 [2024-11-26 20:31:12.349233] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:18:18.905 [2024-11-26 20:31:12.349468] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100638 ] 00:18:19.164 [2024-11-26 20:31:12.513715] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:19.164 [2024-11-26 20:31:12.601764] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.164 [2024-11-26 20:31:12.601841] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:19.732 20:31:13 spdkcli_raid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:19.732 20:31:13 spdkcli_raid -- common/autotest_common.sh@864 -- # return 0 00:18:19.732 20:31:13 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:18:19.732 20:31:13 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:19.732 20:31:13 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:19.732 20:31:13 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:18:19.732 20:31:13 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:19.732 20:31:13 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:19.732 20:31:13 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:18:19.732 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:18:19.732 ' 00:18:21.633 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:18:21.633 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:18:21.633 20:31:14 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:18:21.633 20:31:14 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:21.633 20:31:14 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:18:21.633 20:31:14 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:18:21.633 20:31:14 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:21.633 20:31:14 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:21.633 20:31:14 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:18:21.633 ' 00:18:22.568 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:18:22.834 20:31:16 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:18:22.834 20:31:16 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:22.834 20:31:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:22.834 20:31:16 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:18:22.834 20:31:16 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:22.834 20:31:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:22.834 20:31:16 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:18:22.834 20:31:16 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:18:23.408 20:31:16 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:18:23.408 20:31:16 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:18:23.408 20:31:16 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:18:23.408 20:31:16 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:23.408 20:31:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:23.408 20:31:16 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:18:23.408 20:31:16 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:23.408 20:31:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:23.408 20:31:16 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:18:23.408 ' 00:18:24.343 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:18:24.602 20:31:17 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:18:24.602 20:31:17 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:24.602 20:31:17 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:24.602 20:31:17 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:18:24.602 20:31:17 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:24.602 20:31:17 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:24.602 20:31:18 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:18:24.602 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:18:24.602 ' 00:18:25.980 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:18:25.980 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:18:25.980 20:31:19 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:18:25.980 20:31:19 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:25.980 20:31:19 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:25.980 20:31:19 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 100638 00:18:25.980 20:31:19 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 100638 ']' 00:18:25.980 20:31:19 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 100638 00:18:25.981 20:31:19 spdkcli_raid -- 
common/autotest_common.sh@955 -- # uname 00:18:25.981 20:31:19 spdkcli_raid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:25.981 20:31:19 spdkcli_raid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100638 00:18:25.981 killing process with pid 100638 00:18:25.981 20:31:19 spdkcli_raid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:25.981 20:31:19 spdkcli_raid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:25.981 20:31:19 spdkcli_raid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100638' 00:18:25.981 20:31:19 spdkcli_raid -- common/autotest_common.sh@969 -- # kill 100638 00:18:25.981 20:31:19 spdkcli_raid -- common/autotest_common.sh@974 -- # wait 100638 00:18:26.919 Process with pid 100638 is not found 00:18:26.919 20:31:20 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:18:26.919 20:31:20 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 100638 ']' 00:18:26.919 20:31:20 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 100638 00:18:26.919 20:31:20 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 100638 ']' 00:18:26.919 20:31:20 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 100638 00:18:26.919 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (100638) - No such process 00:18:26.919 20:31:20 spdkcli_raid -- common/autotest_common.sh@977 -- # echo 'Process with pid 100638 is not found' 00:18:26.919 20:31:20 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:18:26.919 20:31:20 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:18:26.919 20:31:20 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:18:26.919 20:31:20 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:18:26.919 00:18:26.919 real 0m8.105s 00:18:26.919 user 0m16.798s 
00:18:26.919 sys 0m1.302s 00:18:26.919 20:31:20 spdkcli_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:26.919 20:31:20 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:26.919 ************************************ 00:18:26.919 END TEST spdkcli_raid 00:18:26.919 ************************************ 00:18:26.919 20:31:20 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:18:26.919 20:31:20 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:26.919 20:31:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:26.919 20:31:20 -- common/autotest_common.sh@10 -- # set +x 00:18:26.919 ************************************ 00:18:26.919 START TEST blockdev_raid5f 00:18:26.919 ************************************ 00:18:26.919 20:31:20 blockdev_raid5f -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:18:26.919 * Looking for test storage... 00:18:26.919 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:18:26.919 20:31:20 blockdev_raid5f -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:26.919 20:31:20 blockdev_raid5f -- common/autotest_common.sh@1681 -- # lcov --version 00:18:26.919 20:31:20 blockdev_raid5f -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:26.919 20:31:20 blockdev_raid5f -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:26.919 20:31:20 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:26.919 20:31:20 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:26.919 20:31:20 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:26.919 20:31:20 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:18:26.919 20:31:20 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:18:26.919 20:31:20 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:18:26.919 20:31:20 blockdev_raid5f -- 
scripts/common.sh@337 -- # read -ra ver2 00:18:26.919 20:31:20 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:18:26.919 20:31:20 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:18:26.919 20:31:20 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:18:26.919 20:31:20 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:26.919 20:31:20 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:18:26.919 20:31:20 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:18:26.919 20:31:20 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:26.919 20:31:20 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:26.919 20:31:20 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:18:26.919 20:31:20 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:18:26.919 20:31:20 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:26.919 20:31:20 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:18:26.919 20:31:20 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:18:26.919 20:31:20 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:18:26.919 20:31:20 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:18:26.919 20:31:20 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:26.919 20:31:20 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:18:26.919 20:31:20 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:18:26.919 20:31:20 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:26.919 20:31:20 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:26.919 20:31:20 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:18:26.919 20:31:20 blockdev_raid5f -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:26.919 20:31:20 blockdev_raid5f -- common/autotest_common.sh@1694 -- # 
export 'LCOV_OPTS= 00:18:26.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.919 --rc genhtml_branch_coverage=1 00:18:26.919 --rc genhtml_function_coverage=1 00:18:26.919 --rc genhtml_legend=1 00:18:26.919 --rc geninfo_all_blocks=1 00:18:26.919 --rc geninfo_unexecuted_blocks=1 00:18:26.919 00:18:26.919 ' 00:18:26.919 20:31:20 blockdev_raid5f -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:26.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.919 --rc genhtml_branch_coverage=1 00:18:26.919 --rc genhtml_function_coverage=1 00:18:26.919 --rc genhtml_legend=1 00:18:26.920 --rc geninfo_all_blocks=1 00:18:26.920 --rc geninfo_unexecuted_blocks=1 00:18:26.920 00:18:26.920 ' 00:18:26.920 20:31:20 blockdev_raid5f -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:26.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.920 --rc genhtml_branch_coverage=1 00:18:26.920 --rc genhtml_function_coverage=1 00:18:26.920 --rc genhtml_legend=1 00:18:26.920 --rc geninfo_all_blocks=1 00:18:26.920 --rc geninfo_unexecuted_blocks=1 00:18:26.920 00:18:26.920 ' 00:18:26.920 20:31:20 blockdev_raid5f -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:26.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.920 --rc genhtml_branch_coverage=1 00:18:26.920 --rc genhtml_function_coverage=1 00:18:26.920 --rc genhtml_legend=1 00:18:26.920 --rc geninfo_all_blocks=1 00:18:26.920 --rc geninfo_unexecuted_blocks=1 00:18:26.920 00:18:26.920 ' 00:18:26.920 20:31:20 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:18:26.920 20:31:20 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:18:26.920 20:31:20 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:18:26.920 20:31:20 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:26.920 20:31:20 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:18:26.920 20:31:20 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:18:26.920 20:31:20 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:18:26.920 20:31:20 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:18:26.920 20:31:20 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:18:26.920 20:31:20 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:18:26.920 20:31:20 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:18:26.920 20:31:20 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:18:26.920 20:31:20 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:18:26.920 20:31:20 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:18:26.920 20:31:20 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:18:26.920 20:31:20 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:18:26.920 20:31:20 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:18:26.920 20:31:20 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:18:26.920 20:31:20 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:18:26.920 20:31:20 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:18:26.920 20:31:20 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:18:26.920 20:31:20 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:18:26.920 20:31:20 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:18:26.920 20:31:20 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:18:26.920 20:31:20 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=100901 00:18:26.920 20:31:20 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:18:26.920 20:31:20 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess 
"$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:18:26.920 20:31:20 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 100901 00:18:26.920 20:31:20 blockdev_raid5f -- common/autotest_common.sh@831 -- # '[' -z 100901 ']' 00:18:26.920 20:31:20 blockdev_raid5f -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:26.920 20:31:20 blockdev_raid5f -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:26.920 20:31:20 blockdev_raid5f -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:26.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:26.920 20:31:20 blockdev_raid5f -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:26.920 20:31:20 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:27.179 [2024-11-26 20:31:20.492130] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:18:27.179 [2024-11-26 20:31:20.492330] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100901 ] 00:18:27.179 [2024-11-26 20:31:20.651060] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.179 [2024-11-26 20:31:20.729174] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.119 20:31:21 blockdev_raid5f -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:28.119 20:31:21 blockdev_raid5f -- common/autotest_common.sh@864 -- # return 0 00:18:28.119 20:31:21 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:18:28.119 20:31:21 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:18:28.119 20:31:21 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:18:28.119 20:31:21 blockdev_raid5f -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.119 20:31:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:28.119 Malloc0 00:18:28.119 Malloc1 00:18:28.119 Malloc2 00:18:28.119 20:31:21 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.119 20:31:21 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:18:28.119 20:31:21 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.119 20:31:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:28.119 20:31:21 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.119 20:31:21 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:18:28.119 20:31:21 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:18:28.119 20:31:21 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.119 20:31:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:28.119 20:31:21 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.119 20:31:21 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:18:28.119 20:31:21 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.119 20:31:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:28.119 20:31:21 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.119 20:31:21 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:18:28.119 20:31:21 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.119 20:31:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:28.119 20:31:21 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.119 20:31:21 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:18:28.119 20:31:21 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 
00:18:28.119 20:31:21 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:18:28.119 20:31:21 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.119 20:31:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:28.119 20:31:21 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.119 20:31:21 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:18:28.119 20:31:21 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:18:28.119 20:31:21 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "6f04a4ee-b198-443d-b0b3-6c8e96cc73b9"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "6f04a4ee-b198-443d-b0b3-6c8e96cc73b9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "6f04a4ee-b198-443d-b0b3-6c8e96cc73b9",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "e220b489-38d7-4bee-ba74-49ac55508b9e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": 
"2fc5e69f-0cbe-4902-a01d-8cff83e0f558",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "d9d78c4a-e6dd-4618-a351-4db2203ad87e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:18:28.119 20:31:21 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:18:28.119 20:31:21 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:18:28.119 20:31:21 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:18:28.119 20:31:21 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 100901 00:18:28.119 20:31:21 blockdev_raid5f -- common/autotest_common.sh@950 -- # '[' -z 100901 ']' 00:18:28.119 20:31:21 blockdev_raid5f -- common/autotest_common.sh@954 -- # kill -0 100901 00:18:28.119 20:31:21 blockdev_raid5f -- common/autotest_common.sh@955 -- # uname 00:18:28.119 20:31:21 blockdev_raid5f -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:28.119 20:31:21 blockdev_raid5f -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100901 00:18:28.119 killing process with pid 100901 00:18:28.119 20:31:21 blockdev_raid5f -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:28.119 20:31:21 blockdev_raid5f -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:28.119 20:31:21 blockdev_raid5f -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100901' 00:18:28.119 20:31:21 blockdev_raid5f -- common/autotest_common.sh@969 -- # kill 100901 00:18:28.119 20:31:21 blockdev_raid5f -- common/autotest_common.sh@974 -- # wait 100901 00:18:28.688 20:31:22 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:28.688 20:31:22 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:18:28.688 
20:31:22 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:18:28.688 20:31:22 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:28.688 20:31:22 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:28.688 ************************************ 00:18:28.688 START TEST bdev_hello_world 00:18:28.688 ************************************ 00:18:28.688 20:31:22 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:18:28.948 [2024-11-26 20:31:22.301697] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:18:28.948 [2024-11-26 20:31:22.301852] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100946 ] 00:18:28.948 [2024-11-26 20:31:22.462094] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.207 [2024-11-26 20:31:22.540791] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:29.466 [2024-11-26 20:31:22.767302] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:18:29.466 [2024-11-26 20:31:22.767360] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:18:29.466 [2024-11-26 20:31:22.767387] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:18:29.466 [2024-11-26 20:31:22.767824] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:18:29.466 [2024-11-26 20:31:22.767989] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:18:29.466 [2024-11-26 20:31:22.768012] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:18:29.466 [2024-11-26 20:31:22.768094] hello_bdev.c: 65:read_complete: *NOTICE*: Read 
string from bdev : Hello World! 00:18:29.466 00:18:29.466 [2024-11-26 20:31:22.768118] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:18:29.726 ************************************ 00:18:29.726 END TEST bdev_hello_world 00:18:29.726 ************************************ 00:18:29.726 00:18:29.726 real 0m0.923s 00:18:29.726 user 0m0.524s 00:18:29.726 sys 0m0.284s 00:18:29.726 20:31:23 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:29.726 20:31:23 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:18:29.726 20:31:23 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:18:29.726 20:31:23 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:29.726 20:31:23 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:29.726 20:31:23 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:29.726 ************************************ 00:18:29.726 START TEST bdev_bounds 00:18:29.726 ************************************ 00:18:29.726 20:31:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:18:29.726 20:31:23 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=100980 00:18:29.726 20:31:23 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:18:29.726 20:31:23 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 100980' 00:18:29.726 Process bdevio pid: 100980 00:18:29.726 20:31:23 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 100980 00:18:29.726 20:31:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 100980 ']' 00:18:29.726 20:31:23 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 
00:18:29.726 20:31:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:29.726 20:31:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:29.726 20:31:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:29.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:29.726 20:31:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:29.726 20:31:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:29.986 [2024-11-26 20:31:23.289082] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:18:29.986 [2024-11-26 20:31:23.289319] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100980 ] 00:18:29.986 [2024-11-26 20:31:23.440689] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:29.986 [2024-11-26 20:31:23.524677] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:29.986 [2024-11-26 20:31:23.524700] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:29.986 [2024-11-26 20:31:23.524828] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:30.921 20:31:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:30.921 20:31:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:18:30.921 20:31:24 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:18:30.921 I/O targets: 00:18:30.921 raid5f: 131072 blocks of 512 bytes (64 MiB) 
00:18:30.921 00:18:30.921 00:18:30.921 CUnit - A unit testing framework for C - Version 2.1-3 00:18:30.921 http://cunit.sourceforge.net/ 00:18:30.921 00:18:30.921 00:18:30.921 Suite: bdevio tests on: raid5f 00:18:30.921 Test: blockdev write read block ...passed 00:18:30.921 Test: blockdev write zeroes read block ...passed 00:18:30.921 Test: blockdev write zeroes read no split ...passed 00:18:30.921 Test: blockdev write zeroes read split ...passed 00:18:31.181 Test: blockdev write zeroes read split partial ...passed 00:18:31.181 Test: blockdev reset ...passed 00:18:31.181 Test: blockdev write read 8 blocks ...passed 00:18:31.181 Test: blockdev write read size > 128k ...passed 00:18:31.181 Test: blockdev write read invalid size ...passed 00:18:31.181 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:31.181 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:31.181 Test: blockdev write read max offset ...passed 00:18:31.181 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:31.181 Test: blockdev writev readv 8 blocks ...passed 00:18:31.181 Test: blockdev writev readv 30 x 1block ...passed 00:18:31.181 Test: blockdev writev readv block ...passed 00:18:31.181 Test: blockdev writev readv size > 128k ...passed 00:18:31.181 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:31.181 Test: blockdev comparev and writev ...passed 00:18:31.181 Test: blockdev nvme passthru rw ...passed 00:18:31.181 Test: blockdev nvme passthru vendor specific ...passed 00:18:31.181 Test: blockdev nvme admin passthru ...passed 00:18:31.181 Test: blockdev copy ...passed 00:18:31.181 00:18:31.181 Run Summary: Type Total Ran Passed Failed Inactive 00:18:31.181 suites 1 1 n/a 0 0 00:18:31.181 tests 23 23 23 0 0 00:18:31.181 asserts 130 130 130 0 n/a 00:18:31.181 00:18:31.181 Elapsed time = 0.419 seconds 00:18:31.181 0 00:18:31.181 20:31:24 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # 
killprocess 100980 00:18:31.181 20:31:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 100980 ']' 00:18:31.181 20:31:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 100980 00:18:31.181 20:31:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:18:31.181 20:31:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:31.181 20:31:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100980 00:18:31.181 20:31:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:31.181 20:31:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:31.181 20:31:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100980' 00:18:31.181 killing process with pid 100980 00:18:31.181 20:31:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@969 -- # kill 100980 00:18:31.181 20:31:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@974 -- # wait 100980 00:18:31.440 20:31:24 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:18:31.440 00:18:31.440 real 0m1.773s 00:18:31.440 user 0m4.199s 00:18:31.440 sys 0m0.432s 00:18:31.440 20:31:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:31.440 20:31:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:31.440 ************************************ 00:18:31.440 END TEST bdev_bounds 00:18:31.440 ************************************ 00:18:31.699 20:31:25 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:18:31.699 20:31:25 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:31.699 20:31:25 blockdev_raid5f -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:18:31.699 20:31:25 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:31.699 ************************************ 00:18:31.699 START TEST bdev_nbd 00:18:31.699 ************************************ 00:18:31.699 20:31:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:18:31.699 20:31:25 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:18:31.699 20:31:25 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:18:31.700 20:31:25 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:31.700 20:31:25 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:31.700 20:31:25 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:18:31.700 20:31:25 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:18:31.700 20:31:25 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:18:31.700 20:31:25 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:18:31.700 20:31:25 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:18:31.700 20:31:25 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:18:31.700 20:31:25 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:18:31.700 20:31:25 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:18:31.700 20:31:25 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:18:31.700 20:31:25 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 
00:18:31.700 20:31:25 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:18:31.700 20:31:25 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=101023 00:18:31.700 20:31:25 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:31.700 20:31:25 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:18:31.700 20:31:25 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 101023 /var/tmp/spdk-nbd.sock 00:18:31.700 20:31:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 101023 ']' 00:18:31.700 20:31:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:18:31.700 20:31:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:31.700 20:31:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:18:31.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:18:31.700 20:31:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:31.700 20:31:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:31.700 [2024-11-26 20:31:25.117127] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:18:31.700 [2024-11-26 20:31:25.117278] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:31.958 [2024-11-26 20:31:25.270290] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.958 [2024-11-26 20:31:25.362988] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.893 20:31:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:32.893 20:31:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:18:32.893 20:31:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:18:32.893 20:31:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:32.893 20:31:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:18:32.893 20:31:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:18:32.893 20:31:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:18:32.893 20:31:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:32.893 20:31:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:18:32.893 20:31:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:18:32.893 20:31:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:18:32.893 20:31:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:18:32.893 20:31:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:18:32.893 20:31:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:18:32.893 20:31:26 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:18:32.893 20:31:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:18:32.893 20:31:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:18:32.893 20:31:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:18:32.893 20:31:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:18:32.894 20:31:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:18:32.894 20:31:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:32.894 20:31:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:32.894 20:31:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:18:32.894 20:31:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:18:32.894 20:31:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:32.894 20:31:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:32.894 20:31:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:32.894 1+0 records in 00:18:32.894 1+0 records out 00:18:32.894 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000291598 s, 14.0 MB/s 00:18:32.894 20:31:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:32.894 20:31:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:18:32.894 20:31:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:32.894 20:31:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
00:18:32.894 20:31:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:18:32.894 20:31:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:32.894 20:31:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:18:32.894 20:31:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:33.153 20:31:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:18:33.153 { 00:18:33.153 "nbd_device": "/dev/nbd0", 00:18:33.153 "bdev_name": "raid5f" 00:18:33.153 } 00:18:33.153 ]' 00:18:33.153 20:31:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:18:33.153 20:31:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:18:33.153 20:31:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:18:33.153 { 00:18:33.153 "nbd_device": "/dev/nbd0", 00:18:33.153 "bdev_name": "raid5f" 00:18:33.153 } 00:18:33.153 ]' 00:18:33.153 20:31:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:33.153 20:31:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:33.153 20:31:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:33.153 20:31:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:33.153 20:31:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:33.153 20:31:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:33.153 20:31:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:33.721 20:31:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:18:33.721 20:31:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:33.721 20:31:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:33.721 20:31:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:33.721 20:31:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:33.721 20:31:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:33.721 20:31:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:33.721 20:31:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:33.721 20:31:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:33.721 20:31:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:33.721 20:31:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:33.721 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:33.721 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:33.721 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:33.980 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:33.980 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:33.980 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:33.980 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:33.980 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:33.980 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:33.980 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:18:33.980 20:31:27 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:18:33.980 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:18:33.980 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:18:33.980 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:33.980 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:18:33.980 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:18:33.980 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:18:33.980 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:18:33.980 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:18:33.980 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:33.980 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:18:33.980 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:33.980 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:33.980 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:33.980 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:18:33.980 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:33.980 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:33.980 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:18:34.240 /dev/nbd0 00:18:34.240 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:34.240 20:31:27 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:34.240 20:31:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:18:34.240 20:31:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:18:34.240 20:31:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:34.240 20:31:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:34.240 20:31:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:18:34.240 20:31:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:18:34.240 20:31:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:34.240 20:31:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:34.240 20:31:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:34.240 1+0 records in 00:18:34.240 1+0 records out 00:18:34.240 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000477493 s, 8.6 MB/s 00:18:34.240 20:31:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:34.240 20:31:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:18:34.240 20:31:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:34.240 20:31:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:34.240 20:31:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:18:34.240 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:34.240 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:34.240 20:31:27 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:34.240 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:34.240 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:34.500 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:18:34.500 { 00:18:34.500 "nbd_device": "/dev/nbd0", 00:18:34.500 "bdev_name": "raid5f" 00:18:34.500 } 00:18:34.500 ]' 00:18:34.500 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:18:34.500 { 00:18:34.500 "nbd_device": "/dev/nbd0", 00:18:34.500 "bdev_name": "raid5f" 00:18:34.500 } 00:18:34.500 ]' 00:18:34.500 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:34.500 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:18:34.500 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:34.500 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:18:34.500 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:18:34.500 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:18:34.500 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:18:34.500 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:18:34.500 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:18:34.500 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:18:34.500 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:34.500 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:18:34.500 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:34.500 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:18:34.500 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:18:34.500 256+0 records in 00:18:34.500 256+0 records out 00:18:34.500 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139992 s, 74.9 MB/s 00:18:34.500 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:34.500 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:18:34.500 256+0 records in 00:18:34.500 256+0 records out 00:18:34.500 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0332338 s, 31.6 MB/s 00:18:34.500 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:18:34.500 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:18:34.500 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:34.500 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:18:34.500 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:34.500 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:18:34.500 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:18:34.500 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:34.500 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:18:34.500 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:34.500 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:34.500 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:34.500 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:34.500 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:34.500 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:34.500 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:34.500 20:31:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:34.759 20:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:34.759 20:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:34.759 20:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:34.759 20:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:34.759 20:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:34.759 20:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:34.759 20:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:34.759 20:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:34.759 20:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:34.759 20:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:34.759 20:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:18:35.018 20:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:35.018 20:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:35.018 20:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:35.018 20:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:35.018 20:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:35.018 20:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:35.018 20:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:35.018 20:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:35.018 20:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:35.018 20:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:18:35.018 20:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:18:35.018 20:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:18:35.018 20:31:28 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:35.018 20:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:35.018 20:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:18:35.018 20:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:18:35.277 malloc_lvol_verify 00:18:35.277 20:31:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:18:35.537 3125e7b9-a1f6-40a2-a910-3d83897b546d 00:18:35.537 20:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:18:35.797 b4c8e1ad-bcc3-4b57-9d10-99497ea016cd 00:18:35.797 20:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:18:36.057 /dev/nbd0 00:18:36.057 20:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:18:36.057 20:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:18:36.057 20:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:18:36.057 20:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:18:36.057 20:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:18:36.057 mke2fs 1.47.0 (5-Feb-2023) 00:18:36.057 Discarding device blocks: 0/4096 done 00:18:36.057 Creating filesystem with 4096 1k blocks and 1024 inodes 00:18:36.057 00:18:36.057 Allocating group tables: 0/1 done 00:18:36.057 Writing inode tables: 0/1 done 00:18:36.057 Creating journal (1024 blocks): done 00:18:36.057 Writing superblocks and filesystem accounting information: 0/1 done 00:18:36.057 00:18:36.057 20:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:36.057 20:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:36.057 20:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:36.057 20:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:36.057 20:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:36.057 20:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:36.057 20:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:36.316 20:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:36.316 20:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:36.316 20:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:36.316 20:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:36.317 20:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:36.317 20:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:36.317 20:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:36.317 20:31:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:36.317 20:31:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 101023 00:18:36.317 20:31:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 101023 ']' 00:18:36.317 20:31:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 101023 00:18:36.317 20:31:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:18:36.317 20:31:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:36.317 20:31:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 101023 00:18:36.317 20:31:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:36.317 20:31:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:36.317 killing process with pid 101023 00:18:36.317 20:31:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 101023' 00:18:36.317 20:31:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@969 -- # kill 101023 00:18:36.317 20:31:29 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@974 -- # wait 101023 00:18:36.887 20:31:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:18:36.887 00:18:36.887 real 0m5.242s 00:18:36.887 user 0m7.789s 00:18:36.887 sys 0m1.417s 00:18:36.887 20:31:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:36.887 20:31:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:36.887 ************************************ 00:18:36.887 END TEST bdev_nbd 00:18:36.887 ************************************ 00:18:36.887 20:31:30 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:18:36.887 20:31:30 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:18:36.887 20:31:30 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:18:36.887 20:31:30 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:18:36.887 20:31:30 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:36.887 20:31:30 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:36.887 20:31:30 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:36.887 ************************************ 00:18:36.887 START TEST bdev_fio 00:18:36.887 ************************************ 00:18:36.887 20:31:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:18:36.887 20:31:30 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:18:36.887 20:31:30 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:18:36.887 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:18:36.887 20:31:30 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:18:36.887 20:31:30 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:18:36.887 20:31:30 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:18:36.887 20:31:30 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:18:36.887 20:31:30 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:18:36.887 20:31:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:36.887 20:31:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:18:36.887 20:31:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:18:36.887 20:31:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:18:36.887 20:31:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:18:36.887 20:31:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:18:36.887 20:31:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:18:36.887 20:31:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:18:36.887 20:31:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:36.887 20:31:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:18:36.887 20:31:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:18:36.887 20:31:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:18:36.887 20:31:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:18:36.887 20:31:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:18:36.887 20:31:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:18:36.887 20:31:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:18:36.887 20:31:30 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:36.887 20:31:30 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:18:36.887 20:31:30 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:18:36.887 20:31:30 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:18:36.887 20:31:30 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:36.887 20:31:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:18:36.887 20:31:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:36.887 20:31:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:18:37.147 ************************************ 00:18:37.147 START TEST bdev_fio_rw_verify 00:18:37.147 ************************************ 00:18:37.147 20:31:30 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:37.147 20:31:30 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:37.147 20:31:30 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:37.147 20:31:30 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:37.147 20:31:30 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:37.147 20:31:30 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:37.147 20:31:30 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:18:37.147 20:31:30 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:37.147 20:31:30 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:37.147 20:31:30 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:37.147 20:31:30 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:18:37.147 20:31:30 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:37.147 20:31:30 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:37.147 20:31:30 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:37.147 20:31:30 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1347 -- # break 00:18:37.147 20:31:30 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:37.147 20:31:30 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:37.147 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:37.147 fio-3.35 00:18:37.147 Starting 1 thread 00:18:49.415 00:18:49.415 job_raid5f: (groupid=0, jobs=1): err= 0: pid=101221: Tue Nov 26 20:31:41 2024 00:18:49.415 read: IOPS=7987, BW=31.2MiB/s (32.7MB/s)(312MiB/10001msec) 00:18:49.415 slat (usec): min=22, max=173, avg=29.04, stdev= 3.55 00:18:49.415 clat (usec): min=14, max=598, avg=195.48, stdev=70.07 00:18:49.415 lat (usec): min=43, max=630, avg=224.51, stdev=70.82 00:18:49.415 clat percentiles (usec): 00:18:49.415 | 50.000th=[ 202], 99.000th=[ 351], 99.900th=[ 392], 99.990th=[ 445], 00:18:49.415 | 99.999th=[ 603] 00:18:49.415 write: IOPS=8415, BW=32.9MiB/s (34.5MB/s)(325MiB/9882msec); 0 zone resets 00:18:49.415 slat (usec): min=11, max=168, avg=26.48, stdev= 7.20 00:18:49.415 clat (usec): min=85, max=914, avg=452.54, stdev=68.82 00:18:49.415 lat (usec): min=108, max=1060, avg=479.02, stdev=71.60 00:18:49.415 clat percentiles (usec): 00:18:49.415 | 50.000th=[ 453], 99.000th=[ 668], 99.900th=[ 734], 99.990th=[ 807], 00:18:49.415 | 99.999th=[ 914] 00:18:49.415 bw ( KiB/s): min=25480, max=35936, per=98.46%, avg=33146.11, stdev=2677.75, samples=19 00:18:49.415 iops : min= 6370, max= 8984, avg=8286.53, stdev=669.44, samples=19 00:18:49.415 lat (usec) : 20=0.01%, 100=5.40%, 250=30.47%, 
500=55.27%, 750=8.84% 00:18:49.415 lat (usec) : 1000=0.03% 00:18:49.415 cpu : usr=98.64%, sys=0.49%, ctx=26, majf=0, minf=10131 00:18:49.415 IO depths : 1=7.8%, 2=19.9%, 4=55.1%, 8=17.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:49.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.415 complete : 0=0.0%, 4=90.1%, 8=9.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.415 issued rwts: total=79880,83166,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:49.415 latency : target=0, window=0, percentile=100.00%, depth=8 00:18:49.415 00:18:49.415 Run status group 0 (all jobs): 00:18:49.415 READ: bw=31.2MiB/s (32.7MB/s), 31.2MiB/s-31.2MiB/s (32.7MB/s-32.7MB/s), io=312MiB (327MB), run=10001-10001msec 00:18:49.415 WRITE: bw=32.9MiB/s (34.5MB/s), 32.9MiB/s-32.9MiB/s (34.5MB/s-34.5MB/s), io=325MiB (341MB), run=9882-9882msec 00:18:49.415 ----------------------------------------------------- 00:18:49.415 Suppressions used: 00:18:49.415 count bytes template 00:18:49.415 1 7 /usr/src/fio/parse.c 00:18:49.415 786 75456 /usr/src/fio/iolog.c 00:18:49.415 1 8 libtcmalloc_minimal.so 00:18:49.415 1 904 libcrypto.so 00:18:49.415 ----------------------------------------------------- 00:18:49.415 00:18:49.415 00:18:49.415 real 0m11.464s 00:18:49.415 user 0m11.663s 00:18:49.415 sys 0m0.825s 00:18:49.415 20:31:41 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:49.415 20:31:41 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:18:49.415 ************************************ 00:18:49.415 END TEST bdev_fio_rw_verify 00:18:49.415 ************************************ 00:18:49.415 20:31:41 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:18:49.415 20:31:41 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:49.415 20:31:41 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:18:49.415 20:31:41 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:49.415 20:31:41 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:18:49.415 20:31:41 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:18:49.415 20:31:41 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:18:49.415 20:31:41 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:18:49.415 20:31:41 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:18:49.415 20:31:41 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:18:49.415 20:31:41 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:18:49.415 20:31:41 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:49.415 20:31:41 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:18:49.415 20:31:41 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:18:49.415 20:31:41 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:18:49.415 20:31:41 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:18:49.415 20:31:41 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "6f04a4ee-b198-443d-b0b3-6c8e96cc73b9"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "6f04a4ee-b198-443d-b0b3-6c8e96cc73b9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": 
false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "6f04a4ee-b198-443d-b0b3-6c8e96cc73b9",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "e220b489-38d7-4bee-ba74-49ac55508b9e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "2fc5e69f-0cbe-4902-a01d-8cff83e0f558",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "d9d78c4a-e6dd-4618-a351-4db2203ad87e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:18:49.415 20:31:41 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:18:49.415 20:31:42 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:18:49.415 20:31:42 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:49.415 20:31:42 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:18:49.415 /home/vagrant/spdk_repo/spdk 00:18:49.415 20:31:42 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:18:49.415 20:31:42 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:18:49.415 00:18:49.415 real 0m11.716s 
00:18:49.415 user 0m11.765s 00:18:49.415 sys 0m0.941s 00:18:49.415 20:31:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:49.415 20:31:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:18:49.415 ************************************ 00:18:49.415 END TEST bdev_fio 00:18:49.415 ************************************ 00:18:49.415 20:31:42 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:49.415 20:31:42 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:49.415 20:31:42 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:18:49.415 20:31:42 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:49.415 20:31:42 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:49.415 ************************************ 00:18:49.415 START TEST bdev_verify 00:18:49.415 ************************************ 00:18:49.415 20:31:42 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:49.415 [2024-11-26 20:31:42.169990] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:18:49.415 [2024-11-26 20:31:42.170145] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101379 ] 00:18:49.415 [2024-11-26 20:31:42.337831] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:49.415 [2024-11-26 20:31:42.401032] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.415 [2024-11-26 20:31:42.401196] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:49.415 Running I/O for 5 seconds... 00:18:51.300 10392.00 IOPS, 40.59 MiB/s [2024-11-26T20:31:45.789Z] 10955.00 IOPS, 42.79 MiB/s [2024-11-26T20:31:46.725Z] 10357.00 IOPS, 40.46 MiB/s [2024-11-26T20:31:47.685Z] 9941.00 IOPS, 38.83 MiB/s [2024-11-26T20:31:47.942Z] 9783.40 IOPS, 38.22 MiB/s 00:18:54.390 Latency(us) 00:18:54.390 [2024-11-26T20:31:47.942Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:54.390 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:54.390 Verification LBA range: start 0x0 length 0x2000 00:18:54.390 raid5f : 5.01 4827.62 18.86 0.00 0.00 39556.34 208.38 39378.84 00:18:54.390 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:54.390 Verification LBA range: start 0x2000 length 0x2000 00:18:54.390 raid5f : 5.04 4937.82 19.29 0.00 0.00 39009.38 465.05 33197.28 00:18:54.390 [2024-11-26T20:31:47.942Z] =================================================================================================================== 00:18:54.390 [2024-11-26T20:31:47.942Z] Total : 9765.44 38.15 0.00 0.00 39279.08 208.38 39378.84 00:18:54.724 00:18:54.724 real 0m5.991s 00:18:54.724 user 0m10.731s 00:18:54.724 sys 0m0.435s 00:18:54.724 20:31:48 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:54.724 20:31:48 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:18:54.724 ************************************ 00:18:54.724 END TEST bdev_verify 00:18:54.724 ************************************ 00:18:54.724 20:31:48 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:18:54.724 20:31:48 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:18:54.724 20:31:48 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:54.724 20:31:48 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:54.724 ************************************ 00:18:54.724 START TEST bdev_verify_big_io 00:18:54.724 ************************************ 00:18:54.724 20:31:48 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:18:54.724 [2024-11-26 20:31:48.223325] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:18:54.724 [2024-11-26 20:31:48.223480] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101466 ] 00:18:54.981 [2024-11-26 20:31:48.377253] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:54.981 [2024-11-26 20:31:48.466751] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:54.981 [2024-11-26 20:31:48.466844] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:55.240 Running I/O for 5 seconds... 
00:18:57.559 568.00 IOPS, 35.50 MiB/s [2024-11-26T20:31:52.059Z] 727.50 IOPS, 45.47 MiB/s [2024-11-26T20:31:53.000Z] 761.33 IOPS, 47.58 MiB/s [2024-11-26T20:31:53.937Z] 777.00 IOPS, 48.56 MiB/s [2024-11-26T20:31:54.197Z] 786.80 IOPS, 49.17 MiB/s 00:19:00.645 Latency(us) 00:19:00.645 [2024-11-26T20:31:54.197Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:00.645 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:00.645 Verification LBA range: start 0x0 length 0x200 00:19:00.645 raid5f : 5.15 394.58 24.66 0.00 0.00 7959847.80 196.75 349830.60 00:19:00.645 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:00.645 Verification LBA range: start 0x200 length 0x200 00:19:00.645 raid5f : 5.23 376.28 23.52 0.00 0.00 8227701.21 197.65 391956.79 00:19:00.645 [2024-11-26T20:31:54.197Z] =================================================================================================================== 00:19:00.645 [2024-11-26T20:31:54.197Z] Total : 770.86 48.18 0.00 0.00 8091597.65 196.75 391956.79 00:19:00.904 00:19:00.904 real 0m6.211s 00:19:00.904 user 0m11.192s 00:19:00.904 sys 0m0.402s 00:19:00.904 20:31:54 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:00.904 20:31:54 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:19:00.904 ************************************ 00:19:00.904 END TEST bdev_verify_big_io 00:19:00.904 ************************************ 00:19:00.904 20:31:54 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:00.904 20:31:54 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:19:00.904 20:31:54 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:00.904 20:31:54 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:00.904 ************************************ 00:19:00.904 START TEST bdev_write_zeroes 00:19:00.904 ************************************ 00:19:00.904 20:31:54 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:01.165 [2024-11-26 20:31:54.505524] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:19:01.165 [2024-11-26 20:31:54.505722] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101548 ] 00:19:01.165 [2024-11-26 20:31:54.678213] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.425 [2024-11-26 20:31:54.740366] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:01.425 Running I/O for 1 seconds... 
00:19:02.818 19791.00 IOPS, 77.31 MiB/s 00:19:02.818 Latency(us) 00:19:02.818 [2024-11-26T20:31:56.370Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:02.818 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:02.818 raid5f : 1.01 19774.03 77.24 0.00 0.00 6447.33 2074.83 8413.79 00:19:02.818 [2024-11-26T20:31:56.370Z] =================================================================================================================== 00:19:02.818 [2024-11-26T20:31:56.370Z] Total : 19774.03 77.24 0.00 0.00 6447.33 2074.83 8413.79 00:19:02.818 00:19:02.818 real 0m1.898s 00:19:02.818 user 0m1.489s 00:19:02.818 sys 0m0.288s 00:19:02.818 20:31:56 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:02.818 20:31:56 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:19:02.818 ************************************ 00:19:02.818 END TEST bdev_write_zeroes 00:19:02.818 ************************************ 00:19:02.818 20:31:56 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:02.818 20:31:56 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:19:02.818 20:31:56 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:02.819 20:31:56 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:03.078 ************************************ 00:19:03.078 START TEST bdev_json_nonenclosed 00:19:03.078 ************************************ 00:19:03.078 20:31:56 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:03.078 [2024-11-26 
20:31:56.489872] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:19:03.078 [2024-11-26 20:31:56.490044] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101590 ] 00:19:03.338 [2024-11-26 20:31:56.645762] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.338 [2024-11-26 20:31:56.726694] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:03.338 [2024-11-26 20:31:56.726816] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:19:03.338 [2024-11-26 20:31:56.726853] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:03.338 [2024-11-26 20:31:56.726875] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:03.596 ************************************ 00:19:03.596 END TEST bdev_json_nonenclosed 00:19:03.596 ************************************ 00:19:03.596 00:19:03.596 real 0m0.520s 00:19:03.596 user 0m0.250s 00:19:03.596 sys 0m0.165s 00:19:03.596 20:31:56 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:03.596 20:31:56 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:19:03.596 20:31:56 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:03.596 20:31:56 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:19:03.596 20:31:56 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:03.596 20:31:56 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:03.596 
************************************ 00:19:03.596 START TEST bdev_json_nonarray 00:19:03.596 ************************************ 00:19:03.596 20:31:56 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:03.596 [2024-11-26 20:31:57.048153] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:19:03.596 [2024-11-26 20:31:57.048299] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101620 ] 00:19:03.858 [2024-11-26 20:31:57.212866] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.858 [2024-11-26 20:31:57.291955] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:03.858 [2024-11-26 20:31:57.292069] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:19:03.858 [2024-11-26 20:31:57.292113] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:03.858 [2024-11-26 20:31:57.292126] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:04.117 00:19:04.117 real 0m0.492s 00:19:04.117 user 0m0.252s 00:19:04.117 sys 0m0.135s 00:19:04.117 20:31:57 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:04.117 20:31:57 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:19:04.117 ************************************ 00:19:04.117 END TEST bdev_json_nonarray 00:19:04.117 ************************************ 00:19:04.117 20:31:57 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:19:04.117 20:31:57 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:19:04.117 20:31:57 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:19:04.117 20:31:57 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:19:04.117 20:31:57 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:19:04.117 20:31:57 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:19:04.117 20:31:57 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:04.117 20:31:57 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:19:04.117 20:31:57 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:19:04.117 20:31:57 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:19:04.117 20:31:57 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:19:04.117 00:19:04.117 real 0m37.351s 00:19:04.117 user 0m50.225s 00:19:04.118 sys 0m5.577s 00:19:04.118 20:31:57 blockdev_raid5f -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:04.118 20:31:57 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:04.118 
************************************ 00:19:04.118 END TEST blockdev_raid5f 00:19:04.118 ************************************ 00:19:04.118 20:31:57 -- spdk/autotest.sh@194 -- # uname -s 00:19:04.118 20:31:57 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:19:04.118 20:31:57 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:19:04.118 20:31:57 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:19:04.118 20:31:57 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:19:04.118 20:31:57 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:19:04.118 20:31:57 -- spdk/autotest.sh@256 -- # timing_exit lib 00:19:04.118 20:31:57 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:04.118 20:31:57 -- common/autotest_common.sh@10 -- # set +x 00:19:04.118 20:31:57 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:19:04.118 20:31:57 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:19:04.118 20:31:57 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:19:04.118 20:31:57 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:19:04.118 20:31:57 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:19:04.118 20:31:57 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:19:04.118 20:31:57 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:19:04.118 20:31:57 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:19:04.118 20:31:57 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:19:04.118 20:31:57 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:19:04.118 20:31:57 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:19:04.118 20:31:57 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:19:04.118 20:31:57 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:19:04.118 20:31:57 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:19:04.118 20:31:57 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:19:04.118 20:31:57 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:19:04.118 20:31:57 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:19:04.118 20:31:57 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:19:04.118 20:31:57 -- spdk/autotest.sh@381 -- # trap - SIGINT 
SIGTERM EXIT 00:19:04.118 20:31:57 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:19:04.118 20:31:57 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:04.118 20:31:57 -- common/autotest_common.sh@10 -- # set +x 00:19:04.118 20:31:57 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:19:04.118 20:31:57 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:19:04.118 20:31:57 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:19:04.118 20:31:57 -- common/autotest_common.sh@10 -- # set +x 00:19:06.686 INFO: APP EXITING 00:19:06.687 INFO: killing all VMs 00:19:06.687 INFO: killing vhost app 00:19:06.687 INFO: EXIT DONE 00:19:06.687 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:06.687 Waiting for block devices as requested 00:19:06.687 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:06.946 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:07.884 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:07.884 Cleaning 00:19:07.884 Removing: /var/run/dpdk/spdk0/config 00:19:07.884 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:19:07.884 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:19:07.884 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:19:07.884 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:19:07.884 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:19:07.884 Removing: /var/run/dpdk/spdk0/hugepage_info 00:19:07.884 Removing: /dev/shm/spdk_tgt_trace.pid69380 00:19:07.884 Removing: /var/run/dpdk/spdk0 00:19:07.884 Removing: /var/run/dpdk/spdk_pid100638 00:19:07.884 Removing: /var/run/dpdk/spdk_pid100901 00:19:07.884 Removing: /var/run/dpdk/spdk_pid100946 00:19:07.884 Removing: /var/run/dpdk/spdk_pid100980 00:19:07.884 Removing: /var/run/dpdk/spdk_pid101212 00:19:07.884 Removing: /var/run/dpdk/spdk_pid101379 00:19:07.884 Removing: 
/var/run/dpdk/spdk_pid101466 00:19:07.884 Removing: /var/run/dpdk/spdk_pid101548 00:19:07.884 Removing: /var/run/dpdk/spdk_pid101590 00:19:07.884 Removing: /var/run/dpdk/spdk_pid101620 00:19:07.884 Removing: /var/run/dpdk/spdk_pid69211 00:19:07.884 Removing: /var/run/dpdk/spdk_pid69380 00:19:07.884 Removing: /var/run/dpdk/spdk_pid69587 00:19:07.884 Removing: /var/run/dpdk/spdk_pid69674 00:19:07.884 Removing: /var/run/dpdk/spdk_pid69703 00:19:07.884 Removing: /var/run/dpdk/spdk_pid69820 00:19:07.884 Removing: /var/run/dpdk/spdk_pid69838 00:19:07.884 Removing: /var/run/dpdk/spdk_pid70026 00:19:07.884 Removing: /var/run/dpdk/spdk_pid70106 00:19:07.884 Removing: /var/run/dpdk/spdk_pid70191 00:19:07.884 Removing: /var/run/dpdk/spdk_pid70291 00:19:07.884 Removing: /var/run/dpdk/spdk_pid70377 00:19:07.884 Removing: /var/run/dpdk/spdk_pid70411 00:19:07.884 Removing: /var/run/dpdk/spdk_pid70453 00:19:07.884 Removing: /var/run/dpdk/spdk_pid70524 00:19:07.884 Removing: /var/run/dpdk/spdk_pid70630 00:19:07.884 Removing: /var/run/dpdk/spdk_pid71076 00:19:07.884 Removing: /var/run/dpdk/spdk_pid71131 00:19:07.884 Removing: /var/run/dpdk/spdk_pid71183 00:19:07.884 Removing: /var/run/dpdk/spdk_pid71194 00:19:07.884 Removing: /var/run/dpdk/spdk_pid71274 00:19:07.884 Removing: /var/run/dpdk/spdk_pid71289 00:19:07.884 Removing: /var/run/dpdk/spdk_pid71359 00:19:07.884 Removing: /var/run/dpdk/spdk_pid71374 00:19:07.884 Removing: /var/run/dpdk/spdk_pid71417 00:19:07.884 Removing: /var/run/dpdk/spdk_pid71435 00:19:07.884 Removing: /var/run/dpdk/spdk_pid71487 00:19:07.884 Removing: /var/run/dpdk/spdk_pid71501 00:19:07.884 Removing: /var/run/dpdk/spdk_pid71639 00:19:07.884 Removing: /var/run/dpdk/spdk_pid71675 00:19:07.884 Removing: /var/run/dpdk/spdk_pid71759 00:19:08.144 Removing: /var/run/dpdk/spdk_pid72951 00:19:08.144 Removing: /var/run/dpdk/spdk_pid73152 00:19:08.144 Removing: /var/run/dpdk/spdk_pid73286 00:19:08.144 Removing: /var/run/dpdk/spdk_pid73896 00:19:08.144 Removing: 
/var/run/dpdk/spdk_pid74097 00:19:08.144 Removing: /var/run/dpdk/spdk_pid74226 00:19:08.144 Removing: /var/run/dpdk/spdk_pid74847 00:19:08.144 Removing: /var/run/dpdk/spdk_pid75172 00:19:08.144 Removing: /var/run/dpdk/spdk_pid75301 00:19:08.144 Removing: /var/run/dpdk/spdk_pid76653 00:19:08.144 Removing: /var/run/dpdk/spdk_pid76895 00:19:08.144 Removing: /var/run/dpdk/spdk_pid77029 00:19:08.144 Removing: /var/run/dpdk/spdk_pid78376 00:19:08.144 Removing: /var/run/dpdk/spdk_pid78618 00:19:08.144 Removing: /var/run/dpdk/spdk_pid78758 00:19:08.144 Removing: /var/run/dpdk/spdk_pid80104 00:19:08.144 Removing: /var/run/dpdk/spdk_pid80539 00:19:08.144 Removing: /var/run/dpdk/spdk_pid80674 00:19:08.144 Removing: /var/run/dpdk/spdk_pid82126 00:19:08.144 Removing: /var/run/dpdk/spdk_pid82374 00:19:08.144 Removing: /var/run/dpdk/spdk_pid82509 00:19:08.144 Removing: /var/run/dpdk/spdk_pid83955 00:19:08.144 Removing: /var/run/dpdk/spdk_pid84209 00:19:08.144 Removing: /var/run/dpdk/spdk_pid84338 00:19:08.144 Removing: /var/run/dpdk/spdk_pid85791 00:19:08.144 Removing: /var/run/dpdk/spdk_pid86268 00:19:08.144 Removing: /var/run/dpdk/spdk_pid86408 00:19:08.144 Removing: /var/run/dpdk/spdk_pid86541 00:19:08.144 Removing: /var/run/dpdk/spdk_pid86952 00:19:08.144 Removing: /var/run/dpdk/spdk_pid87674 00:19:08.144 Removing: /var/run/dpdk/spdk_pid88033 00:19:08.144 Removing: /var/run/dpdk/spdk_pid88711 00:19:08.144 Removing: /var/run/dpdk/spdk_pid89146 00:19:08.144 Removing: /var/run/dpdk/spdk_pid89898 00:19:08.144 Removing: /var/run/dpdk/spdk_pid90314 00:19:08.144 Removing: /var/run/dpdk/spdk_pid92242 00:19:08.144 Removing: /var/run/dpdk/spdk_pid92676 00:19:08.144 Removing: /var/run/dpdk/spdk_pid93100 00:19:08.144 Removing: /var/run/dpdk/spdk_pid95133 00:19:08.144 Removing: /var/run/dpdk/spdk_pid95607 00:19:08.144 Removing: /var/run/dpdk/spdk_pid96118 00:19:08.144 Removing: /var/run/dpdk/spdk_pid97154 00:19:08.144 Removing: /var/run/dpdk/spdk_pid97472 00:19:08.144 Removing: 
/var/run/dpdk/spdk_pid98405 00:19:08.144 Removing: /var/run/dpdk/spdk_pid98720 00:19:08.144 Removing: /var/run/dpdk/spdk_pid99653 00:19:08.144 Removing: /var/run/dpdk/spdk_pid99965 00:19:08.144 Clean 00:19:08.404 20:32:01 -- common/autotest_common.sh@1451 -- # return 0 00:19:08.404 20:32:01 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:19:08.404 20:32:01 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:08.404 20:32:01 -- common/autotest_common.sh@10 -- # set +x 00:19:08.404 20:32:01 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:19:08.404 20:32:01 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:08.404 20:32:01 -- common/autotest_common.sh@10 -- # set +x 00:19:08.404 20:32:01 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:19:08.404 20:32:01 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:19:08.404 20:32:01 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:19:08.404 20:32:01 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:19:08.404 20:32:01 -- spdk/autotest.sh@394 -- # hostname 00:19:08.404 20:32:01 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:19:08.662 geninfo: WARNING: invalid characters removed from testname! 
00:19:35.228 20:32:26 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:36.169 20:32:29 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:38.704 20:32:31 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:41.254 20:32:34 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:43.162 20:32:36 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:45.701 20:32:38 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:47.608 20:32:41 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:19:47.868 20:32:41 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:19:47.868 20:32:41 -- common/autotest_common.sh@1681 -- $ lcov --version 00:19:47.868 20:32:41 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:19:47.868 20:32:41 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:19:47.868 20:32:41 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:19:47.868 20:32:41 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:19:47.868 20:32:41 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:19:47.868 20:32:41 -- scripts/common.sh@336 -- $ IFS=.-: 00:19:47.868 20:32:41 -- scripts/common.sh@336 -- $ read -ra ver1 00:19:47.868 20:32:41 -- scripts/common.sh@337 -- $ IFS=.-: 00:19:47.868 20:32:41 -- scripts/common.sh@337 -- $ read -ra ver2 00:19:47.868 20:32:41 -- scripts/common.sh@338 -- $ local 'op=<' 00:19:47.868 20:32:41 -- scripts/common.sh@340 -- $ ver1_l=2 00:19:47.868 20:32:41 -- scripts/common.sh@341 -- $ ver2_l=1 00:19:47.868 20:32:41 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:19:47.868 20:32:41 -- scripts/common.sh@344 -- $ case "$op" in 00:19:47.868 20:32:41 -- scripts/common.sh@345 -- $ : 1 00:19:47.868 20:32:41 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:19:47.868 20:32:41 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:47.868 20:32:41 -- scripts/common.sh@365 -- $ decimal 1 00:19:47.868 20:32:41 -- scripts/common.sh@353 -- $ local d=1 00:19:47.868 20:32:41 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:19:47.868 20:32:41 -- scripts/common.sh@355 -- $ echo 1 00:19:47.868 20:32:41 -- scripts/common.sh@365 -- $ ver1[v]=1 00:19:47.868 20:32:41 -- scripts/common.sh@366 -- $ decimal 2 00:19:47.868 20:32:41 -- scripts/common.sh@353 -- $ local d=2 00:19:47.868 20:32:41 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:19:47.868 20:32:41 -- scripts/common.sh@355 -- $ echo 2 00:19:47.868 20:32:41 -- scripts/common.sh@366 -- $ ver2[v]=2 00:19:47.868 20:32:41 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:19:47.868 20:32:41 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:19:47.868 20:32:41 -- scripts/common.sh@368 -- $ return 0 00:19:47.868 20:32:41 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:47.868 20:32:41 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:19:47.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.868 --rc genhtml_branch_coverage=1 00:19:47.868 --rc genhtml_function_coverage=1 00:19:47.868 --rc genhtml_legend=1 00:19:47.868 --rc geninfo_all_blocks=1 00:19:47.868 --rc geninfo_unexecuted_blocks=1 00:19:47.868 00:19:47.868 ' 00:19:47.868 20:32:41 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:19:47.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.868 --rc genhtml_branch_coverage=1 00:19:47.868 --rc genhtml_function_coverage=1 00:19:47.868 --rc genhtml_legend=1 00:19:47.868 --rc geninfo_all_blocks=1 00:19:47.868 --rc geninfo_unexecuted_blocks=1 00:19:47.868 00:19:47.868 ' 00:19:47.868 20:32:41 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:19:47.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.868 --rc genhtml_branch_coverage=1 00:19:47.868 --rc 
genhtml_function_coverage=1 00:19:47.868 --rc genhtml_legend=1 00:19:47.868 --rc geninfo_all_blocks=1 00:19:47.868 --rc geninfo_unexecuted_blocks=1 00:19:47.868 00:19:47.868 ' 00:19:47.868 20:32:41 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:19:47.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.868 --rc genhtml_branch_coverage=1 00:19:47.868 --rc genhtml_function_coverage=1 00:19:47.868 --rc genhtml_legend=1 00:19:47.868 --rc geninfo_all_blocks=1 00:19:47.868 --rc geninfo_unexecuted_blocks=1 00:19:47.868 00:19:47.868 ' 00:19:47.868 20:32:41 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:47.868 20:32:41 -- scripts/common.sh@15 -- $ shopt -s extglob 00:19:47.868 20:32:41 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:19:47.868 20:32:41 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:47.868 20:32:41 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:47.868 20:32:41 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.868 20:32:41 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.868 20:32:41 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.868 20:32:41 -- paths/export.sh@5 -- $ export PATH 00:19:47.868 20:32:41 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.868 20:32:41 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:19:47.868 20:32:41 -- common/autobuild_common.sh@479 -- $ date +%s 00:19:47.868 20:32:41 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1732653161.XXXXXX 00:19:47.868 20:32:41 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1732653161.wBofmu 00:19:47.868 20:32:41 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:19:47.868 20:32:41 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']' 00:19:47.868 20:32:41 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:19:47.868 20:32:41 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:19:47.868 20:32:41 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:19:47.868 20:32:41 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme 
--exclude /tmp --status-bugs' 00:19:47.868 20:32:41 -- common/autobuild_common.sh@495 -- $ get_config_params 00:19:47.868 20:32:41 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:19:47.868 20:32:41 -- common/autotest_common.sh@10 -- $ set +x 00:19:47.868 20:32:41 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:19:47.868 20:32:41 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:19:47.868 20:32:41 -- pm/common@17 -- $ local monitor 00:19:47.868 20:32:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:19:47.868 20:32:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:19:47.868 20:32:41 -- pm/common@25 -- $ sleep 1 00:19:47.868 20:32:41 -- pm/common@21 -- $ date +%s 00:19:47.868 20:32:41 -- pm/common@21 -- $ date +%s 00:19:47.868 20:32:41 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1732653161 00:19:47.868 20:32:41 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1732653161 00:19:48.128 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1732653161_collect-cpu-load.pm.log 00:19:48.128 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1732653161_collect-vmstat.pm.log 00:19:49.067 20:32:42 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:19:49.067 20:32:42 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:19:49.067 20:32:42 -- spdk/autopackage.sh@14 -- $ timing_finish 00:19:49.067 20:32:42 -- common/autotest_common.sh@736 -- $ 
flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:19:49.067 20:32:42 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:19:49.067 20:32:42 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:19:49.067 20:32:42 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:19:49.067 20:32:42 -- pm/common@29 -- $ signal_monitor_resources TERM 00:19:49.067 20:32:42 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:19:49.067 20:32:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:19:49.067 20:32:42 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:19:49.067 20:32:42 -- pm/common@44 -- $ pid=103150 00:19:49.067 20:32:42 -- pm/common@50 -- $ kill -TERM 103150 00:19:49.067 20:32:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:19:49.067 20:32:42 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:19:49.067 20:32:42 -- pm/common@44 -- $ pid=103152 00:19:49.067 20:32:42 -- pm/common@50 -- $ kill -TERM 103152 00:19:49.067 + [[ -n 6164 ]] 00:19:49.067 + sudo kill 6164 00:19:49.076 [Pipeline] } 00:19:49.092 [Pipeline] // timeout 00:19:49.098 [Pipeline] } 00:19:49.112 [Pipeline] // stage 00:19:49.118 [Pipeline] } 00:19:49.133 [Pipeline] // catchError 00:19:49.143 [Pipeline] stage 00:19:49.145 [Pipeline] { (Stop VM) 00:19:49.159 [Pipeline] sh 00:19:49.442 + vagrant halt 00:19:52.730 ==> default: Halting domain... 00:20:00.968 [Pipeline] sh 00:20:01.246 + vagrant destroy -f 00:20:03.782 ==> default: Removing domain... 
00:20:04.055 [Pipeline] sh 00:20:04.338 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:20:04.347 [Pipeline] } 00:20:04.363 [Pipeline] // stage 00:20:04.369 [Pipeline] } 00:20:04.384 [Pipeline] // dir 00:20:04.389 [Pipeline] } 00:20:04.404 [Pipeline] // wrap 00:20:04.413 [Pipeline] } 00:20:04.426 [Pipeline] // catchError 00:20:04.436 [Pipeline] stage 00:20:04.439 [Pipeline] { (Epilogue) 00:20:04.453 [Pipeline] sh 00:20:04.737 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:20:10.025 [Pipeline] catchError 00:20:10.028 [Pipeline] { 00:20:10.041 [Pipeline] sh 00:20:10.409 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:20:10.409 Artifacts sizes are good 00:20:10.466 [Pipeline] } 00:20:10.483 [Pipeline] // catchError 00:20:10.496 [Pipeline] archiveArtifacts 00:20:10.503 Archiving artifacts 00:20:10.617 [Pipeline] cleanWs 00:20:10.632 [WS-CLEANUP] Deleting project workspace... 00:20:10.632 [WS-CLEANUP] Deferred wipeout is used... 00:20:10.639 [WS-CLEANUP] done 00:20:10.641 [Pipeline] } 00:20:10.657 [Pipeline] // stage 00:20:10.662 [Pipeline] } 00:20:10.677 [Pipeline] // node 00:20:10.684 [Pipeline] End of Pipeline 00:20:10.719 Finished: SUCCESS